This paper introduces a new approach to type theory called pure subtype systems. Pure subtype systems differ from traditional approaches to type theory (such as pure type systems) because the theory is based on subtyping, rather than typing. Proper types and typing are completely absent from the theory; the subtype relation is defined directly over objects. The traditional typing relation is shown to be a special case of subtyping, so the loss of types comes without any loss of generality.

Pure subtype systems provide a uniform framework which seamlessly integrates subtyping with dependent and singleton types. The framework was designed as a theoretical foundation for several problems of practical interest, including mixin modules, virtual classes, and feature-oriented programming.

The cost of using pure subtype systems is the complexity of the meta-theory. We formulate the subtype relation as an abstract reduction system, and show that the theory is sound if the underlying reductions commute. We are able to show that the reductions commute locally, but have thus far been unable to show that they commute globally. Although the proof is incomplete, it is close enough to rule out obvious counter-examples. We present it as an open problem in type theory.
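To make the property at stake concrete: two reductions R and S commute locally when any one-step divergence from a common term can be rejoined. Here is a minimal Haskell sketch (my own illustration, not the paper's system) of a bounded, brute-force check of local commutation for finitely branching step functions. Note why local is weaker than global: without strong normalization, Newman's lemma does not let you lift the local result to arbitrary divergences, which is exactly the gap the abstract describes.

```haskell
import Data.List (intersect)

-- All terms reachable from x in at most d steps of `step`
-- (a bounded reflexive-transitive closure, so the search terminates).
closure :: Eq a => Int -> (a -> [a]) -> a -> [a]
closure 0 _    x = [x]
closure d step x = x : concatMap (closure (d - 1) step) (step x)

-- r and s commute locally at t if every one-step divergence
-- t -r-> u, t -s-> v can be rejoined: u -s*-> w and v -r*-> w.
locallyCommute :: Eq a => Int -> (a -> [a]) -> (a -> [a]) -> a -> Bool
locallyCommute d r s t =
  and [ not (null (closure d s u `intersect` closure d r v))
      | u <- r t, v <- s t ]
```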
A thought-provoking take on type theory using subtyping as the foundation for all relations. The author collapses the type hierarchy and unifies types and terms via the subtyping relation. This also has the side effect of combining type checking and partial evaluation: functions can accept "types" and can also return "types".
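To see what the collapsed hierarchy looks like, here is a tiny sketch. The names (Obj, Top, whnf) and the grammar are my own construction for illustration, not the paper's: a single syntactic category, where a lambda can take and return "types", and where evaluation does the work usually split between the type checker and a partial evaluator.

```haskell
-- One syntactic category: "types" are objects like everything else.
data Obj
  = Top                 -- the maximal object, playing the role of "Type"
  | Var String
  | Lam String Obj Obj  -- λx:A. b, which also serves as the function type
  | App Obj Obj
  deriving (Show, Eq)

-- Naive substitution (assumes no variable capture, for brevity).
substO :: String -> Obj -> Obj -> Obj
substO x s (Var y)     | x == y = s
substO x s (Lam y a b) | x /= y = Lam y (substO x s a) (substO x s b)
substO x s (App f a)            = App (substO x s f) (substO x s a)
substO _ _ o                    = o

-- Weak-head evaluation: "type-level" computation is just evaluation,
-- since there is no separate category of types.
whnf :: Obj -> Obj
whnf (App f a) = case whnf f of
  Lam x _ b -> whnf (substO x a b)
  f'        -> App f' a
whnf o = o
```

For example, whnf (App (Lam "x" Top (Var "x")) Top) reduces to Top: the identity function is happily applied to a "type", by the same machinery that would apply it to a "value".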
Of course, it's not all sunshine and roses. As the abstract explains, the metatheory is quite complicated and soundness is still an open question. Not too surprising, considering that type checking with Type:Type is undecidable.
They likely have similar expressiveness, but one interesting restriction in the subtype calculus is that function subtyping intentionally forbids contravariance on argument types (Section 3.3), i.e. λx:t.u ≤ λx:t'.u iff t ≡ t'. Lambda-aleph supports contravariance.
The ideas behind lambda aleph are similar. In particular, lambda aleph does not distinguish between functions (lambda) and function types (pi). Moreover, a "type" in lambda aleph can be thought of as a term that returns a set of values, which echoes an idea that I explored in an earlier paper (OOPSLA '06).
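One hedged way to picture "a term that returns a set of values": represent a type as its characteristic predicate, so that membership and subtyping become ordinary functions. This is my toy rendering of the idea, not lambda aleph's (or the paper's) actual formalism.

```haskell
-- A type as its characteristic predicate: the "set of values" it denotes.
type Ty a = a -> Bool

nat, even', small :: Ty Int
nat   n = n >= 0
even' n = n `mod` 2 == 0
small n = n >= 0 && n < 10

-- x "has type" t exactly when t x holds.
hasType :: a -> Ty a -> Bool
hasType x t = t x

-- Subtyping as set inclusion, checked by brute force on a finite domain.
subtypeOn :: [a] -> Ty a -> Ty a -> Bool
subtypeOn dom s t = all (\x -> not (s x) || t x) dom
-- subtypeOn [-20..20] small nat == True; even' is not a subtype of nat.
```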
However, lambda aleph has not yet been well-formalized. In particular, although they mention subtyping, they do not provide a formal definition of it, nor do they discuss the meta-theory. In my experience, it's not too hard to implement a type-checker for these systems. The algorithmic subtyping system that I describe in the paper is easy to implement, works well in practice, and returns the expected results. Proving that the type system is sound is a completely different matter; hopefully Sweeney et al. will have better luck than I did.
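For flavor, here is the shape such an algorithmic check can take, reusing the Obj sketch from a few comments up. This is only a toy approximation of the paper's algorithmic subtyping, but it shows the pattern: weak-head normalize both sides, then compare structurally, with the invariant-domain rule for functions mentioned above. Syntactic equality (==) stands in for the equivalence t ≡ t' a real checker would use.

```haskell
-- Toy algorithmic subtyping over the Obj sketch above.
subObj :: Obj -> Obj -> Bool
subObj a b = go (whnf a) (whnf b)
  where
    go _ Top = True                      -- everything is below Top
    go (Var x) (Var y) = x == y          -- variables only by reflexivity
    go (Lam x s m) (Lam y t n) =
      s == t                             -- invariant domains (Section 3.3)
        && subObj m (substO y (Var x) n) -- covariant bodies
    go (App f p) (App g q) = go f g && p == q  -- stuck applications
    go _ _ = False
```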
I have proved the conjectured confluence of the rewrite system in section 7. A proof in Agda can be found at https://gist.github.com/twanvl/7453495 . The trick I used is to bound the 'depth' of terms as well as the depth of steps. A step of depth 'n' can only act on terms of depth 'n', and contain only steps of lower depth, including inside a transitive-symmetric closure. Extending this proof to the lambda calculus might be possible, but it will certainly get (even more) ugly.
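Roughly, the stratification looks like this. This is a very loose Haskell rendering of the idea with a placeholder rewrite rule standing in for the actual system of section 7; see the gist for the real, machine-checked statement.

```haskell
-- A toy term language with an explicit depth measure.
data Tm = Atom Int | Pair Tm Tm deriving (Show, Eq)

depth :: Tm -> Int
depth (Atom _)   = 0
depth (Pair l r) = 1 + max (depth l) (depth r)

-- Placeholder top-level rewrite (NOT the actual rules of section 7).
rewrite :: Tm -> [Tm]
rewrite (Pair l r) = [Pair r l]
rewrite _          = []

-- A step of depth n only touches terms of depth <= n, and any steps it
-- performs on subterms have strictly smaller depth -- the stratification
-- that gives the proof its induction principle.
stepAt :: Int -> Tm -> [Tm]
stepAt n t
  | depth t > n = []
  | otherwise   = rewrite t ++ sub t
  where
    sub (Pair l r) = [Pair l' r | l' <- stepAt (n - 1) l]
                  ++ [Pair l r' | r' <- stepAt (n - 1) r]
    sub _          = []
```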
This is a great proof! Unfortunately, after reading through it, I must conclude that my conjecture in section 7 was wrong; a proof for the simplistic system does not lead to a proof for the lambda calculus. :-( Defining a depth on terms is a clever idea, but I can think of no corresponding way to define "depth" on lambda-expressions.
I introduced the simple system because it (1) has the same local confluence property as subtyping, and (2) is not strongly normalizing. But in my rush to simplify, I did not stop to think that the simple system has an additional inductive principle (depth) that was not present in the lambda-calculus.
For the untyped lambda calculus, you can label all beta redexes in a term, with the lowest index at the end of the term. Then the reduction relation for this labelled calculus only allows beta reduction of labelled redexes. Such a reduction eliminates one redex, but can duplicate lower-numbered ones. This means that each step decreases the string of redex labels in lexicographic order. Hence the relation is strongly normalizing, so you only need to show local confluence of it.
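A sketch of the labelled calculus, in my own rendering with de Bruijn indices (the numbering discipline that makes the measure decrease lexicographically lives in the surrounding argument, not in the code): only applications carrying a label may fire, firing consumes that label, and labels inside the copied argument get duplicated.

```haskell
-- Labelled lambda terms: an application optionally carries a redex label.
data Term
  = Var Int                     -- de Bruijn index
  | Lam Term
  | App (Maybe Int) Term Term
  deriving (Show, Eq)

-- Standard de Bruijn shifting and substitution.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)     = Var (if k >= c then k + d else k)
shift d c (Lam b)     = Lam (shift d (c + 1) b)
shift d c (App l f a) = App l (shift d c f) (shift d c a)

subst :: Int -> Term -> Term -> Term
subst j s (Var k)     = if k == j then s else Var k
subst j s (Lam b)     = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App l f a) = App l (subst j s f) (subst j s a)

-- One step of labelled beta reduction.  Firing a redex consumes its own
-- label, but labels inside the copied argument are duplicated -- exactly
-- the duplication the lexicographic measure has to survive.  Unlabelled
-- redexes never fire, so reduction cannot go on forever.
step :: Term -> [Term]
step (Var _)     = []
step (Lam b)     = Lam <$> step b
step (App l f a) =
     [ shift (-1) 0 (subst 0 (shift 1 0 a) b) | Just _ <- [l], Lam b <- [f] ]
  ++ [ App l f' a | f' <- step f ]
  ++ [ App l f a' | a' <- step a ]
```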
Another trick is to add simple types, just for the purposes of one step. This is effectively the s