cubefox 3 days ago:
Where do you need full type unification rather than only type pattern matching?
housecarpenter 3 days ago:
Consider the identity function f, which just takes an argument and returns it unchanged, and has the polymorphic type a -> a, where a is a type variable. What's the type of f(f)? Obviously, since f(f) = f, it should be a -> a as well. But to infer it without actually evaluating it, using the standard Hindley-Milner algorithm, we reason as follows: the two f's can have different specific types, so we introduce a new type variable b. The type of the first f will be a substitution instance of a -> a, and the type of the second f will be a substitution instance of b -> b. We introduce a new type variable c for the return type and solve the equation (a -> a) = ((b -> b) -> c) using unification. This gives us the substitution {a = (b -> b), c = (b -> b)}, and so we see that the return type c is b -> b.

But if we use pattern matching rather than unification, the variables on one of the two sides of the equation (a -> a) = ((b -> b) -> c) are effectively treated as constants referring to atomic types, not variables. If we treat the variables on the left side as constants, i.e. we treat a as a constant, we have to match b -> b to the constant a, which is impossible: the type a is atomic and the type b -> b isn't. If we treat the variables on the right side as constants, i.e. we treat b and c as constants, then we have to match a to both b -> b and c, which means our substitution would have to make b -> b and c equal, which is impossible given that c is an atomic type and b -> b isn't.
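To make the contrast concrete, here is a minimal Haskell sketch of the two operations over a toy type language with only variables and arrows. The names (Ty, Subst, unify, match, merge) are made up for illustration, not taken from any particular library; real inference engines do much more, but the asymmetry between the two functions is the point.

    -- Toy types: a variable or a function arrow.
    import qualified Data.Map as M

    data Ty = TVar String | TArr Ty Ty deriving (Eq, Show)
    type Subst = M.Map String Ty

    -- Apply a substitution to a type.
    apply :: Subst -> Ty -> Ty
    apply s t@(TVar v) = M.findWithDefault t v s
    apply s (TArr a b) = TArr (apply s a) (apply s b)

    -- Full unification: variables on *either* side may be bound.
    unify :: Ty -> Ty -> Maybe Subst
    unify (TVar v) t = bind v t
    unify t (TVar v) = bind v t
    unify (TArr a1 b1) (TArr a2 b2) = do
      s1 <- unify a1 a2
      s2 <- unify (apply s1 b1) (apply s1 b2)
      pure (M.map (apply s2) s1 `M.union` s2)

    bind :: String -> Ty -> Maybe Subst
    bind v t
      | t == TVar v = Just M.empty
      | occurs v t  = Nothing            -- occurs check
      | otherwise   = Just (M.singleton v t)
      where
        occurs x (TVar y)   = x == y
        occurs x (TArr a b) = occurs x a || occurs x b

    -- One-way matching: only variables in the left-hand pattern may be
    -- bound; the right-hand side is treated as a ground (constant) type.
    match :: Ty -> Ty -> Maybe Subst
    match (TVar v) t = Just (M.singleton v t)
    match (TArr a1 b1) (TArr a2 b2) = do
      s1 <- match a1 a2
      s2 <- match b1 b2
      merge s1 s2
    match _ _ = Nothing

    -- Combine two match results, failing on conflicting bindings.
    merge :: Subst -> Subst -> Maybe Subst
    merge s1 s2
      | all (\v -> s1 M.! v == s2 M.! v) (M.keys (M.intersection s1 s2))
                  = Just (M.union s1 s2)
      | otherwise = Nothing

Running the equation from above, with lhs = TArr (TVar "a") (TVar "a") and rhs = TArr (TArr (TVar "b") (TVar "b")) (TVar "c"):

    unify lhs rhs  -- Just {a := b -> b, c := b -> b} (up to Show formatting)
    match lhs rhs  -- Nothing: a would have to bind to both b -> b and c
    match rhs lhs  -- Nothing: the arrow b -> b can't match the constant a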
guerrilla 3 days ago:
I think in dependently typed programming languages and theorem provers. As I understand it, even System F (like Haskell) benefits, no?