I’m by no means an expert on the type theory of dependent types…
But… Dependently typed languages have varying rules for unification…
For example, to unify Palindrome (makeList x y z) and Palindrome [1, 2, 3], the compiler could run both sides (by reduction / normalization) and determine that they are structurally equal (think #[structural_match]); see definitional equality, http://adam.chlipala.net/cpdt/html/Equality.html.
To ensure that such unification is sound and erasable, many dependently typed languages require such type-level expressions to be both pure and total (they can’t diverge by any mechanism, such as panics or infinite loops). A notable exception is Dependent Haskell (which hasn’t been fully implemented yet, but you can read about it in Richard Eisenberg’s thesis, https://arxiv.org/abs/1610.07978).
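To make that concrete, here is a minimal GHC sketch of equality by normalization; MakeList is a hypothetical stand-in for makeList, not something from any library:

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality
import GHC.TypeLits (Nat)

-- Hypothetical stand-in for makeList, building a type-level list.
type family MakeList (x :: Nat) (y :: Nat) (z :: Nat) :: [Nat] where
  MakeList x y z = '[x, y, z]

-- Accepted because MakeList 1 2 3 normalizes to '[1, 2, 3], so both
-- sides of the equality are definitionally equal.
definitional :: MakeList 1 2 3 :~: '[1, 2, 3]
definitional = Refl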
Where the compiler can’t determine by itself that two types are equal, because the language’s definitional equality is not strong enough, you usually have some library-encodable mechanism for propositional equality of types with which you can massage the proofs yourself:
data a :~: b where
  Refl :: a :~: a
http://hackage.haskell.org/package/base-4.9.1.0/docs/Data-Type-Equality.html
data (=) : a -> b -> Type where
  Refl : x = x
http://docs.idris-lang.org/en/latest/proofs/pluscomm.html
data _≡_ {a} {A : Set a} (x : A) : A → Set a where
  refl : x ≡ x
http://agda.readthedocs.io/en/v2.5.2/language/built-ins.html#equality
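Pattern matching on Refl is how you “massage” such proofs by hand: the match brings the equality into scope. For illustration, the usual combinators (essentially sym, trans and castWith from Data.Type.Equality) are definable this way in Haskell:

{-# LANGUAGE GADTs, TypeOperators #-}
import Data.Type.Equality

-- Matching on Refl brings a ~ b into scope, which is what makes the
-- right-hand sides type check. These mirror sym, trans and castWith.
symm :: a :~: b -> b :~: a
symm Refl = Refl

trans' :: a :~: b -> b :~: c -> a :~: c
trans' Refl Refl = Refl

castWith' :: a :~: b -> a -> b
castWith' Refl x = x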
Now, you can also add more axioms to the language, such as the commutativity of addition for usize. This all depends on what primitive types exist in the language and so on. Typically, the more axioms a type theory has, the more it can do for you and the less you will need to prove yourself.
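As an illustration of what “proving it yourself” looks like when commutativity is not an axiom, here is a Haskell sketch over a hand-rolled Peano Nat (the names Nat, SNat, plusComm and so on are mine; GHC’s built-in type-level naturals work differently):

{-# LANGUAGE DataKinds, GADTs, TypeFamilies, TypeOperators #-}
import Data.Type.Equality

data Nat = Z | S Nat

-- Run-time witnesses of type-level naturals, needed to do induction.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

type family (n :: Nat) + (m :: Nat) :: Nat where
  'Z   + m = m
  'S n + m = 'S (n + m)

-- n + Z = n does not hold by mere reduction; prove it by induction.
plusZero :: SNat n -> (n + 'Z) :~: n
plusZero SZ     = Refl
plusZero (SS n) = case plusZero n of Refl -> Refl

plusSucc :: SNat n -> SNat m -> (n + 'S m) :~: 'S (n + m)
plusSucc SZ     _ = Refl
plusSucc (SS n) m = case plusSucc n m of Refl -> Refl

-- Commutativity, assembled from the two lemmas above.
plusComm :: SNat n -> SNat m -> (n + m) :~: (m + n)
plusComm SZ     m = sym (plusZero m)
plusComm (SS n) m = case plusComm n m of Refl -> sym (plusSucc m n)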
I don’t agree; using PhantomData pervasively for type-level programming typically leads to encoding monstrosities such as https://hackage.haskell.org/package/singletons-2.2 (N.B. Eisenberg, who authored this package, agrees that this is a bad approach and that it is a hack).
They don’t have to. As previously mentioned, most dependently typed languages ensure termination, so you can’t fail (diverge) at run-time. Now, given user input, most programs will have some notion of “failure”, such as when the user provides incorrect input, but that is normal program behavior, not a logic bug.
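In other words, in a total language “failure” on bad input is just an ordinary value that the program returns and handles, not divergence. A trivial Haskell sketch (the names are mine):

-- Bad user input yields an ordinary Left value, handled like any other
-- result; the function itself is still total.
parseEven :: Int -> Either String Int
parseEven n
  | even n    = Right n
  | otherwise = Left "expected an even number"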
For the same reasons I’ve noted above, this does not seem true.
That does not necessarily entail that type checking must wait until the actual run-time values are known; for example, if you have two abstract values x: usize and y: usize, the type checker could unify x + y and y + x even though the values of x and y are not known until run-time.
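The classic example of type checking against abstract values is length-indexed vectors. A Haskell sketch (the names Vec and append are mine): append type checks with n and m completely abstract; unifying x + y with y + x specifically would additionally need an axiom or a proof such as plusComm above.

{-# LANGUAGE DataKinds, GADTs, TypeFamilies, TypeOperators #-}

data Nat = Z | S Nat

type family (n :: Nat) + (m :: Nat) :: Nat where
  'Z   + m = m
  'S n + m = 'S (n + m)

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Type checks with n and m fully abstract: in the Cons case the checker
-- unifies 'S n + m with 'S (n + m) by a single reduction step, without
-- ever knowing the run-time lengths.
append :: Vec n a -> Vec m a -> Vec (n + m) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)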
If the user has no recourse to prove this manually and the type system can’t do this for us automatically, then such a system does not seem particularly useful.
Refinement types are a type-system feature with which you can refine existing base types by a predicate. For example, you may state let foo : {x: usize | even(x)} = make_even_number(); and then the type of foo is the type of all even usizes. In contrast, dependent types are usually built up by construction.
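To make the contrast concrete: the refinement side attaches a predicate to an existing base type, as in {x: usize | even(x)} above, while the “by construction” side builds the evidence into the data itself. A Haskell sketch of the latter (the Even type here is mine):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- "By construction": the only ways to build a value of Even n are the two
-- constructors below, so any such value witnesses that n is even.
data Even (n :: Nat) where
  EvenZ :: Even 'Z
  EvenS :: Even n -> Even ('S ('S n))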
Thinking a bit more on the topic of monomorphization, it seems to me that what we really want to avoid is a situation such as:
data Bool = True | False
-- These types have different memory representations...
data TypeA = ...
data TypeB = ...
pickType : Bool -> Type
pickType True = TypeA
pickType False = TypeB
We can get the same situation with:
// These types have different memory representations...
struct TypeA(Vec<&'static str>);
struct TypeB;
trait Func { type Output; }
struct Input<let cond: bool> { ... }
impl Func for Input<{true}> { type Output = TypeA; }
impl Func for Input<{false}> { type Output = TypeB; }
fn pick_type(cond: bool) -> <Input<{cond}> as Func>::Output {
    match cond {
        true => TypeA(vec!["foo", "bar", ..]),
        false => TypeB,
    }
}
It seems to me that we want to prevent the pattern matching on types that is occurring in the implementations of Func for Input<{x}>. Two ways come to mind to prevent such pattern matching:
1. Ensure that x is polymorphic in the input and that you can’t specialize on let in impls.
2. Ensure that let parameters and associated types can’t be combined in implementations.
I think that if you ensure either 1. or 2., you will avoid having run-time values affect the memory layout of types.