There are some cases where the compiler would reject it and other cases where it would (wrongly) infer (). The plan was to change the latter cases to make it infer ! instead, possibly only in the 2018 edition since it’s a breaking change, though that change got put on the back-burner at some point.

I think it would be nice if the compiler could infer a type in more of these situations. In OP’s example it should infer !, since we have !: Error. Where there are other trait bounds, though, a good mechanism might be to have “default” types for traits, e.g. a new kind of declaration which looks like this:

default<T> IntoIterator<Item = T> + Extend<T> + Default = Vec<T>;

This says that whenever there’s an inferred type which needs to satisfy the bound IntoIterator<Item = T> + Extend<T> + Default, the compiler should infer the type Vec<T>.
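To make that concrete, here is a small sketch (the function and variable names are made up for illustration) of a case where inference currently gives up, and where such a default could plausibly kick in:

// Today the collect() call needs the turbofish (or some other annotation),
// because the hidden return type is otherwise unconstrained and rustc asks
// for type annotations:
fn doubled(numbers: &[u32]) -> impl IntoIterator<Item = u32> + Extend<u32> + Default {
    numbers.iter().map(|n| n * 2).collect::<Vec<u32>>()
}

// With the proposed declaration
//     default<T> IntoIterator<Item = T> + Extend<T> + Default = Vec<T>;
// the annotation could be dropped and the compiler would fall back to Vec<u32>.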
Edit: Elaborating on that idea a bit:

There would be a base declaration of default ?Sized = !; built into the language. Crates can then add their own default SomeTrait = SomeType; declarations, where coherence rules apply to SomeTrait (e.g. SomeTrait must be defined in the same crate, or at least part of it must be in the case of SomeTrait + SomeOtherTrait). Default type declarations can override other default type declarations by being more specific, similar to how impl specialization works, and the compiler will always pick the most specific declaration which satisfies the required bounds. For example, we could have the declarations:
default TraitA = A;
default TraitA + TraitB = B;
Then if the compiler needs to pick a type which satisfies TraitA + TraitB + TraitC, it will first check if B: TraitA + TraitB + TraitC, then if A: TraitA + TraitB + TraitC, then if !: TraitA + TraitB + TraitC, before failing if none of them fit. The compiler should always mark these inferred types as being inferred in case they ever appear in error messages, and should be able to point the user to the rule that was used to infer the type.
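To illustrate that lookup order with ordinary, compilable trait and impl definitions (only the default declarations themselves are hypothetical):

trait TraitA {}
trait TraitB {}
trait TraitC {}

struct A;
struct B;

impl TraitA for A {}
impl TraitA for B {}
impl TraitB for B {}
impl TraitC for B {}

// Hypothetical declarations, as above (not valid Rust today):
//     default TraitA = A;
//     default TraitA + TraitB = B;
//
// For an inference variable constrained by TraitA + TraitB + TraitC, the most
// specific default is tried first:
//     B: TraitA + TraitB + TraitC   (holds with the impls above, so B is chosen)
//     A: TraitA + TraitB + TraitC   (only checked if B had failed)
//     !: TraitA + TraitB + TraitC   (the built-in base default, checked last)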
Edit 2: In case anyone needs a motivating example for this: ! can’t implement Future without picking some specific Output type. This means it can’t be used as the inferred type where we have a trait bound of Future<Output = SomeOtherConcreteType>. We could, however, have this instead:
enum NeverOutput<T> {}

impl<T> Future for NeverOutput<T> {
    type Output = T;
    ...
}
default<T> Future<Output = T> = NeverOutput<T>;
Then we would be allowed to write:
fn foo() -> impl Future<Output = String> {
    unimplemented!()
}
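As an aside: the NeverOutput sketch above wouldn’t compile quite as written, since enum NeverOutput<T> {} leaves T unused (E0392) and the poll body is elided. Here is one way NeverOutput could be written in today’s Rust; the default declaration itself is still the hypothetical part:

use std::convert::Infallible;
use std::future::Future;
use std::marker::PhantomData;
use std::pin::Pin;
use std::task::{Context, Poll};

// Uninhabited, but still mentions T: the Infallible field means no value can
// ever be constructed, and PhantomData<T> uses the otherwise-unused parameter.
enum NeverOutput<T> {
    Never(Infallible, PhantomData<T>),
}

impl<T> Future for NeverOutput<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<T> {
        // This can never run, because no NeverOutput value can exist.
        match *self {
            NeverOutput::Never(ref never, _) => match *never {},
        }
    }
}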