cjfd 5 hours ago

One very common way to have decidable dependent types and avoid the paradox is to use a type hierarchy. I.e., there is not just one star but a countable series of them *_1, *_2, *_3, ..., and the rule then becomes that *_i is of type *_(i+1), and that if in forall A, B A is of type *_i and B is of type *_j, forall A, B is of type *_(max(i, j) + 1).
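For concreteness, a minimal sketch of this hierarchy in Lean 4 syntax (assuming Lean's `Type 0`, `Type 1`, ... correspond to the *_1, *_2, ... above):

```lean
-- Each universe lives in the next one up:
#check Type 0   -- Type : Type 1
#check Type 1   -- Type 1 : Type 2

-- A universe can never contain itself: `Type i : Type i` is rejected,
-- which is what blocks the Russell/Girard-style paradox.
```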

SkySkimmer 3 hours ago | parent | next [-]

>if in forall A, B A is of type *_i and B is of type *_j, forall A, B is of type *_(max(i, j) + 1).

Minor correction: no +1 in forall
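A quick illustration of the corrected rule, sketched in Lean 4 (where the universes are written `Type i` rather than *_i): the forall/arrow type lands at the max of the two levels, with no extra +1.

```lean
variable (A : Type 3) (B : Type 5)

-- The function type lands at max(3, 5) = 5, not 6:
#check A → B              -- A → B : Type 5

-- Quantifying over a universe does bump the level, but only because
-- `Type 3` itself lives in `Type 4`, not because of a +1 in the forall rule:
#check (A : Type 3) → A   -- (A : Type 3) → A : Type 4
```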

anon291 39 minutes ago | parent | prev | next [-]

This is correct but just delays the problem. It is still impossible to give a type to level-generic functions (i.e. functions that work uniformly at all universe levels).
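Lean's universe polymorphism, for instance, only side-steps this schematically: the level variable is a parameter of the declaration, not something you can quantify over inside the theory. A minimal sketch:

```lean
universe u

-- `id'` works at every level u, but u is a schematic level variable,
-- not a term: "forall levels u, ..." is not itself a type in the theory.
def id' {α : Sort u} (a : α) : α := a

-- Each use instantiates u to a concrete level:
#check @id'.{1}   -- u := 1, i.e. the copy of id' living at Type
```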

The fundamental reality is that no type theory offers the ability to type everything.

khaledh 4 hours ago | parent | prev | next [-]

I'm no expert myself, but is this the same as Russell's type hierarchy theory? This is from a quick Google AI search answer:

    Bertrand Russell developed type theory to avoid the paradoxes, like his own, that arose from naive set theory's unrestricted use of predicates and collections. His solution, outlined in the 1908 article "Mathematical logic as based on the theory of types" and later expanded in Principia Mathematica (1910–1913), created a hierarchy of types to prevent self-referential paradoxes by ensuring that an entity could not be defined in terms of itself. He proposed a system where variables have specific types, and entities of a given type can only be built from entities of a lower type.
IshKebab 4 hours ago | parent | prev [-]

Ah, is that what Lean does with its type universes?