Never types and inference

Take the function

fn run() -> Result<(), impl Error> {
    Ok(())
}

where the error type is unbounded. The compiler currently rejects this. Once we have the never type, I think it should pass: as long as there is an impl Error for !, the compiler can monomorphise this to fn run() -> Result<(), !>. Would it work like that?
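For comparison, here is that outcome written by hand, a sketch using std::convert::Infallible (stable, uninhabited, and it implements Error) as a stand-in for !:

use std::convert::Infallible; // uninhabited; std provides `impl Error for Infallible`

// The monomorphisation I'd like inference to produce on its own:
fn run() -> Result<(), Infallible> {
    Ok(())
}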

Sorry for the poorly worded question.

I think never_type used to do something like this, inferring never-set inference variables to !, but that change got dropped because it was a breaking change in some cases where it used to give () instead. I can’t find the issue about that right now, though…

There are some cases where the compiler would reject it and other cases where it would (wrongly) infer (). The plan was to change the latter cases to make it infer ! instead, possibly only in the 2018 edition since it’s a breaking change, though that change got put on the back-burner at some point.

I think it would be nice if the compiler could infer a type in more of these situations. In OP’s example it should infer ! since we have !: Error, though where there are other trait bounds a good mechanism might be to have “default” types for traits, eg. have a new kind of declaration which looks like this:

default<T> IntoIterator<Item = T> + Extend<T> + Default = Vec<T>;

This says that whenever there’s an inferred type which needs to satisfy the bound IntoIterator<Item = T> + Extend<T> + Default then the compiler should infer the type Vec<T>.
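For instance (hypothetical code, assuming the declaration above were in scope), this would compile, with the opaque type defaulting to Vec<u32>:

// Hypothetical: the opaque return type is constrained only by its bounds,
// which match the default declaration with T = u32, so inference would
// resolve it to Vec<u32>. (Today this is rejected.)
fn numbers() -> impl IntoIterator<Item = u32> + Extend<u32> + Default {
    unimplemented!()
}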

Edit: Elaborating on that idea a bit:

There would be a base declaration of default ?Sized = !; built into the language. Crates can then add their own default SomeTrait = SomeType; declarations where coherence rules apply to SomeTrait (eg. SomeTrait must be defined in the same crate, or at least part of it must be in the case of SomeTrait + SomeOtherTrait). Default type declarations can override other default type declarations by being more specific, similar to how impl specialization works, and the compiler will always pick the most specific declaration which satisfies the required bounds. For example, we could have the declarations:

default TraitA = A;
default TraitA + TraitB = B;

Then if the compiler needs to pick a type which satisfies TraitA + TraitB + TraitC, it will first check if B: TraitA + TraitB + TraitC, then if A: TraitA + TraitB + TraitC, then if !: TraitA + TraitB + TraitC before failing if none of them fit. The compiler should always mark these inferred types as being inferred in case they ever appear in error messages, and should be able to point the user to the rule that was used to infer the type.

Edit 2: In case anyone needs a motivating example for this: ! can’t implement Future without picking some specific Output type. This means it can’t be used as the inferred type where we have a trait bound of Future<Output = SomeOtherConcreteType>. We could, however, have this instead:

use std::{future::Future, pin::Pin, task::{Context, Poll}};

// Uninhabited, but still mentions T (an empty enum with an unused parameter is an E0392 error):
enum NeverOutput<T> {
    Never(std::convert::Infallible, std::marker::PhantomData<T>),
}

impl<T> Future for NeverOutput<T> {
    type Output = T;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<T> {
        match *self {
            NeverOutput::Never(never, _) => match never {},
        }
    }
}

default<T> Future<Output = T> = NeverOutput<T>;

Then we would be allowed to write:

fn foo() -> impl Future<Output = String> {
    unimplemented!()
}
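(For what it's worth, the explicit version already compiles today, since unimplemented!() has type ! and ! coerces to any type; the default declaration would just let inference name the type for us:)

// Valid today, given the NeverOutput definition above:
fn foo_explicit() -> NeverOutput<String> {
    unimplemented!()
}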

It would be nice to solve the unimplemented problem one way or another.

I ran into another similar issue today while doing Advent of Code:

fn iter(&self) -> impl Iterator<Item=Point2d> {
    unimplemented!()
}

gives

error[E0277]: `()` is not an iterator

the type should be !, not (), I think.
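A stable workaround, for what it's worth, is to follow the panic with an unreachable expression that pins down the opaque type. A sketch, with Point2d as a stand-in for the puzzle's type:

struct Point2d; // stand-in for the type from the puzzle

// The unimplemented!() statement diverges; the unreachable tail expression
// exists only to tell inference which iterator the opaque type is.
// (This compiles, with an unreachable_code warning.)
fn iter() -> impl Iterator<Item = Point2d> {
    unimplemented!();
    std::iter::empty()
}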

I don’t fully understand why solving this needs to generate random throw-away types. Why can’t it just be that ! simply satisfies any and all traits?

! cannot satisfy all traits; otherwise you could do <! as Default>::default() and obtain a value of an uninhabited type. The same basically goes for any other trait fn that doesn't take self, and it matters even more for those that also don't return Self, since those may even have a valid non-panicking implementation.
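A sketch of the problem (this is rejected today, and has to be, since there is no impl Default for !):

#![feature(never_type)]

// If ! satisfied every trait, this would produce a value of an uninhabited
// type without panicking or looping, which is unsound. As it stands, the
// compiler rejects it with E0277.
fn conjure() -> ! {
    <! as Default>::default()
}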


Oh, that’s right, I didn’t think it through fully…

Still, I think we should look for a better solution; just assuming something potentially surprising seems flaky in any case.

It is if you do

#![feature(never_type)]
fn iter() -> impl Iterator<Item=i32> {
    unimplemented!()
}

There's been a bunch of discussion about this; see Tracking issue for promoting `!` to a type (RFC 1216) · Issue #35121 · rust-lang/rust · GitHub and sub-issues.


!: Default isn’t the big issue if we disregard the internals of default(). A more fundamental problem is that you can’t simultaneously support

!: Iterator<Item = u8>,
!: Iterator<Item = u16>,

Allowing this breaks the type system's assumption that an associated type is a one-to-one relationship (a function from the implementing type to the associated type), as mentioned in Never types and inference and also in Draft RFC: default types for traits.

(Note: I don’t think default trait = type is the correct answer to this problem)


Well, the other idea mentioned in that thread is to allow trait impls to be generic in their associated types, eg. allow impls like this:

// T not used on Future or !, just on the associated type.
impl<T> Future for ! {
    type Output = T;
    ...
}

Which would make <! as Future>::Output a type variable which can unify with anything. I don't know if this is sound though.

I don't think it is sound. If this were allowed, then we couldn't make any assumptions about <U as Future>::Output, since U == ! is also possible.

fn g<U: Future>() {
    let a: U::Output;
    let b: U::Output;
    // a and b can be of different type?
}
g::<!>();

It's unsound.

Remember that associated types are type-level functions, wherefore the type-system may assume X ~ Y => <X as F>::Out ~ <Y as F>::Out.

Adding your idea to the type system lets us then derive:

fn transmute<T, U>(x: T) -> U {
    struct Input;
    trait Function { type Output; }
    // Your added rule:
    impl<S> Function for Input { type Output = S; }

    // Legal because `Output ~ ?S1` so specifically `T ~ Output`:
    let tmp: <Input as Function>::Output = x;
    // Legal because `Output ~ ?S2` so specifically `Output ~ U`:
    let tmp: U = tmp;
    // And we have bricked the type-system:
    tmp
}
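If that were accepted, plain safe code could then convert between arbitrary types:

// Hypothetical call, assuming the (unsound) impl above type-checked:
let s: String = transmute(42u64); // reinterprets an integer as a String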

@Centril: I disagree that that should type-check.

let tmp: <Input as Function>::Output = x;

On this line, <Input as Function>::Output will initially be a type variable, which then unifies with the type of x to give tmp: T. However, on the next line tmp gets used, causing that same type variable to try to unify with U. So it should fail with U != T.

@kennytm

fn g<U: Future>() {
   let a: U::Output;
   let b: U::Output;
   // a and b can be of different type?
}
g::<!>();

I think the U: Future in the function’s type parameters implies that a specific U: Future impl has been passed to the function. In other words, no, there will be one type variable and a and b will both have that type.

@nikomatsakis: You probably know the most about traits and soundness, given your work on chalk. What’s your opinion on the last 6 posts in this thread?

Not niko, but I don’t see any way this could be sound. The whole reason associated types exist in the first place is to serve as type-level functions that guarantee uniqueness. Otherwise they’re exactly the same as type parameters. Allowing multiple impls of the same trait where the associated types are different would violate some very fundamental assumptions about the type system.


I don’t see how this would be any more unsound than allowing multiple impls per (type, trait) pair and subjecting the choice of impl to inference. The general case may be problematic, because type inference may be insufficient to disambiguate between two different impls, but for the natural, vacuous impls for ! the compiler can exploit the fact that they are all functionally identical, so the choice doesn’t matter.

But even though I don’t think this would be necessarily unsound, I’m not sure this feature would fit Rust very well; it’s quite incongruous with the rest of the type system, where a (type, trait) pair is supposed to determine the impl unambiguously, if one exists. If we add impl relevance for traits, I’d rather have it available for all types, not just the empty type.

I am really not keen on using reasoning like “the impls are vacuous.” You’re talking as if having a ! appear as a type argument is a logical contradiction… but ! is merely an empty set of values, not an empty set of types.


No, having ! as a type argument is not a contradiction. But having a !-typed value as a function argument is a contradiction.

! has an empty set of possible values. Therefore, it is possible to write an implementation for any function taking a !-typed argument, and any two such implementations will vacuously have the same computational content. Implementing a trait for a type is mostly a matter of implementing the trait’s methods; thus, any trait for which all methods take a Self-typed* argument can be implemented for !, and any two such implementations will behave identically (that is to say, not behave at all, because their methods cannot be invoked in the first place). This property doesn’t hold for all traits (Default is the usual counterexample), but for many of them it does.

* or &Self-typed, &mut Self-typed, etc.
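Concretely, a trait whose only method takes &self can be given such a vacuous impl today (nightly, with never_type), using the same trick std uses for impl Display for !:

#![feature(never_type)]

trait Describe {
    fn describe(&self) -> String;
}

// Vacuous: no value of type ! exists, so describe can never be called.
// *self has type !, which coerces to the return type.
impl Describe for ! {
    fn describe(&self) -> String {
        *self
    }
}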
