The type system. In a very fundamental way. Currently it's always certain that f32 values are compatible with f32 values. They're the same type. You know everything about this type just by its name.
With dependent types it's no longer possible to tell whether an f32 value can be passed to a function or assigned to a variable that accepts f32. Now you have to scan all the code paths and all the function bodies that may possibly interact with the two values in question to learn their actual hidden type, inferred from the code.
Consider the Ord-compatible Foo:
impl Foo {
    pub fn new(field: f64) -> Foo {
        assert!(!field.is_nan());
        Foo { field }
    }
}
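For context, here is a minimal self-contained sketch (my own illustration, not from the original post) of why that constructor matters: because `new` rejects NaN, `partial_cmp` on `field` can never return `None`, so a total `Ord` impl is sound in today's Rust.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Foo {
    field: f64,
}

impl Foo {
    pub fn new(field: f64) -> Foo {
        // The invariant: no Foo ever holds a NaN.
        assert!(!field.is_nan());
        Foo { field }
    }
}

impl Eq for Foo {}

impl PartialOrd for Foo {
    fn partial_cmp(&self, other: &Foo) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Foo {
    fn cmp(&self, other: &Foo) -> std::cmp::Ordering {
        // Safe to unwrap: NaN is the only source of unordered f64
        // comparisons, and the constructor rules it out.
        self.field.partial_cmp(&other.field).unwrap()
    }
}

fn main() {
    let mut v = vec![Foo::new(2.0), Foo::new(-1.0), Foo::new(0.5)];
    v.sort(); // sort requires Ord, which the invariant makes sound
    println!("{:?}", v);
}
```

The compiler accepts the `Ord` impl purely on the programmer's say-so; it has no idea the `assert!` in `new` is what makes it correct. That gap is exactly what the rest of this comment is about.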
Now if the compiler sees:
impl Foo {
pub fn set(&mut self, value: f64) {
self.field = value;
}
}
Is this code valid? What should happen?
It may say "Error! Can't assign f64::with_NaN to self.field which has the type f64::without_NaN". That is, it could interpret the field's type as something different from regular f64.
Or it could accept this definition, but interpret it as the equivalent of pub fn set<F>(value: F) where F: f64 + Ord, and allow foo.set(1.0), but err on foo.set(0./0.) (which is NaN; note that 1./0. evaluates to infinity, which the assert would accept), or require assert!(!x.is_nan()); foo.set(x).
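The second interpretation corresponds to what you can already do by hand: re-check the invariant dynamically on every write. A sketch of that pattern (the `set` body is my addition, mirroring the snippet above):

```rust
#[derive(Debug, PartialEq)]
struct Foo {
    field: f64,
}

impl Foo {
    pub fn new(field: f64) -> Foo {
        assert!(!field.is_nan());
        Foo { field }
    }

    // Instead of the compiler proving `value` is never NaN, the
    // invariant is enforced at runtime on every write.
    pub fn set(&mut self, value: f64) {
        assert!(!value.is_nan());
        self.field = value;
    }
}

fn main() {
    let mut foo = Foo::new(1.0);
    foo.set(1.0 / 0.0); // infinity is not NaN, so this is accepted
    assert!(foo.field.is_infinite());
    // foo.set(0.0 / 0.0); // NaN: would panic at runtime, not compile time
}
```

The difference a dependent type system would make is moving that panic from runtime to a compile error, at the cost described below.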
In both cases you get new features in the type system: either new types, or new where clauses that don't use the existing type syntax and can't be named except by writing assert!. I'm not judging whether that would be good or bad, but it is certainly a big departure from how types currently work in Rust.
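For comparison, the way this is handled today without dependent types is to put the invariant into a named wrapper type with a checked constructor (the `ordered-float` crate's `NotNan` is the production version of this idea; below is a minimal hand-rolled sketch of my own):

```rust
use std::cmp::Ordering;

// The invariant lives in the *type name*, not in facts the compiler
// infers about a plain f64.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct NotNan(f64);

impl NotNan {
    // The only way to build a NotNan is through this checked
    // constructor, so every NotNan value upholds the invariant.
    pub fn new(value: f64) -> Option<NotNan> {
        if value.is_nan() { None } else { Some(NotNan(value)) }
    }
}

impl Eq for NotNan {}

impl Ord for NotNan {
    fn cmp(&self, other: &NotNan) -> Ordering {
        // Cannot fail: NaN was excluded at construction.
        self.partial_cmp(other).unwrap()
    }
}

fn main() {
    assert!(NotNan::new(0.0 / 0.0).is_none()); // NaN rejected up front
    let a = NotNan::new(1.0).unwrap();
    let b = NotNan::new(2.0).unwrap();
    assert_eq!(a.cmp(&b), Ordering::Less);
}
```

This is the non-dependent compromise: the "without NaN" fact gets a name (`NotNan`) and the check happens once, at the construction boundary, rather than being tracked by the compiler through every code path.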