Never types and inference

A ! argument is a contradiction in the runtime semantics, meaning anything can happen if the function is called.

Allowing inconsistent associated types makes it a contradiction in the type system… meaning anything can happen if the compiler merely tries to type-check it!

5 Likes

A ! argument is a contradiction at compile time, because you cannot obtain a ! value in the first place to call the function with. Merely obtaining a ! is undefined behaviour. The compiler doesn't have to emit the function body at all; it's dead code. And all dead code is equivalent.
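A minimal illustration of that point (a nightly-only sketch, since ! in argument position requires the unstable never_type feature):

#![feature(never_type)]

// No caller can ever produce a `!` to pass in, so this body is dead code.
fn impossible(x: !) -> u32 {
    x // a value of type `!` coerces to any type, including u32
}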

There is no contradiction in different impls having different associated types. The proposal is to allow ! to have multiple impls of a single trait under certain conditions; namely, for the 'purely negative' traits I mentioned. Impls of such traits for ! can only differ in their choice of associated types, and are otherwise equivalent. Choosing a specific impl is therefore a mere matter of type inference, no different from choosing a specific instance of a generic type.

Of course this would require re-working some fundamentals. For one thing, instantiating a generic with a trait bound would require establishing an identity for the impl and passing it along with the type; the compiler can no longer 're-derive' the impl each time the type is used. But it's not a priori inconsistent.

Also, general impl-relevance has been requested before.

@canndrew's proposal leaves vacuous impls for ! anonymous, because type inference is sufficient to distinguish between them – at least in the motivating cases. I can imagine some (admittedly rather contrived) cases where it wouldn't be:

trait Foo {
    type Iter: Iterator;
}

struct Bar;

impl Foo for Bar {
    type Iter = !;
}

What is <Bar as Foo>::Iter::Item? A specific impl of Iterator for ! would have to be chosen by the time Foo is implemented for Bar, but there is no syntax to disambiguate it.

1 Like

We seem to have very different ideas of what a "contradiction at compile time" means. To me, it means an ICE (or worse, a miscompile of reachable code). To say there is a contradiction here is to suggest that the type checker must never try to compute one of these associated type projections.


...but this is just arguing over definitions. Clearly, you agree that it is not absurd for the type checker to compute these associated types, so I shall focus on your real argument, which, if I understand it, is as follows:

  • Suppose that all methods of Trait take a receiver. The trait may have type parameters and/or lifetime parameters.
    • In fact, without loss of generality, let's assume it has no methods, period. (Discussing the semantics and equivalence of unreachable code is nothing more than a distraction right now, and was the reason I misinterpreted your argument in the first place).
  • All associated type projections of the form <! as Trait>::Assoc, when computed, shall produce a fresh type inference variable.
  • There may be (small) extensions to the type system necessary to support it properly.
    • In particular, each trait bound T: Trait needs to be internally tagged in some manner to ensure that all occurrences of <T as Trait>::Assoc produce the same inference variable.
  • Once all of these features are in place, it will not introduce unsoundness.

This is what I disagree with.

However, given that a proof of unsoundness (i.e. a counterexample) is several orders of magnitude easier to construct than a proof of soundness (i.e. a complete formalization), the burden of proof currently lies on me. I will be thinking about this.

3 Likes

One problem is that traits that don’t themselves cause paradoxes, like:

trait Innocent {
    type Assoc;
}

…can be referenced by other things that do, such as this:

struct Iffy<T: Innocent> {
    val: <T as Innocent>::Assoc
}

Or, if we change the trait slightly:

trait Funky {
    fn funk(&self);
}
trait Innocent { // still looks innocent
    type Assoc: Default + Funky;
}

…this:

pub fn foo<T: Innocent>() {
    let x: T::Assoc = Default::default();
    x.funk(); // what should this do?
}

You could say, “just don’t allow Iffy<!> or foo<!> to be instantiated”. But it doesn’t have to be a concrete instantiation; it could be buried inside a number of other instantiations.

Well, those examples don’t necessarily show unsoundness, just places the compiler could get stuck and perhaps produce instantiation-time errors. But how about this, using the typemap crate:

// crate A:
trait Innocent: Any {
    type Assoc: Any;
}

pub struct MyKey<T>(T);
impl<T: Innocent> typemap::Key for MyKey<T> {
    type Value = <T as Innocent>::Assoc;
}

// crate B (depends on A):
pub fn set(map: &mut typemap::TypeMap, val: u32) {
    map.insert::<A::MyKey<!>>(val);
}

// crate C (depends on A):
pub fn get(map: &typemap::TypeMap) -> &String {
    map.get::<A::MyKey<!>>().unwrap()
}

// crate D (depends on B, C):
fn huh() {
    let mut map = typemap::TypeMap::new();
    B::set(&mut map, 42);          // stores 42u32 under MyKey<!>
    let s: &String = C::get(&map); // ??? reads the same entry as a String
}

Finally, there’s an RFC to allow impls to be proven disjoint if they have bounds with disjoint associated types, i.e.

impl<T> Foo for T where T: Innocent<Assoc=i32> {
    // some impl
}
impl<T> Foo for T where T: Innocent<Assoc=i64> {
    // some other impl
}

That RFC was postponed rather than accepted, but only until chalk becomes a thing; it’s definitely a desirable feature to have eventually. But it’s broken if both impls can be selected for T = !.

I believe there actually is a form of reasoning that would soundly allow incoherent impls in general, including the implicit ones you’re proposing. There’s a reason I had to reach out to something weird like typemap to get unsoundness. But… it would be a pretty huge change and would have all sorts of limitations. (I could also just be wrong.) It might be interesting to explore someday in order to allow ‘local impls’… but not just for this.

3 Likes

I know it’s gotten a bit lost, but I feel like my original question has been answered well: in the kinds of situations I show, the compiler will assume the never type in the future. It’s then just a question of which traits are implemented for it: either all traits where that’s coherent (e.g. not Default), or only those where it is implemented manually.

I’m curious if there’s an appropriate notion of a “never-safe” trait, i.e. a trait for which we can always assume !: Trait a priori. For instance, I’d expect a trait consisting only of methods that take a receiver to be never-safe, since such methods are dead code: their bodies can never run, so a panic (or an empty match on the receiver) is a correct implementation.
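A minimal sketch of that intuition, using a hand-rolled empty enum as a stand-in (impls for ! itself can't be written on stable Rust):

trait Greet {
    fn greet(&self) -> String;
}

// Uninhabited stand-in for `!`.
enum Never {}

impl Greet for Never {
    fn greet(&self) -> String {
        // No `Never` value can exist, so the empty match proves this
        // body is unreachable; not even a panic is needed.
        match *self {}
    }
}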

typemap is a pretty good example of a thing we currently expect to be sound which would no longer be valid after this change, but because it uses unsafe code, one could shift the blame and say that typemap is retroactively no longer sound. I think that would be a great loss for type-level programming (and would completely wreck e.g. frunk's plans to replace the empty coproduct CNil with !; see the sketch below)... but still, to convince some of the tougher folk, it'd be nice to have an example that is purely safe code.

However, I have thus far struggled to come up with one.
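For reference, frunk's empty coproduct is essentially the following (paraphrased from the crate's source), with CNil being exactly the kind of uninhabited base case that ! was hoped to replace:

// The base case of frunk's type-level list of alternatives.
pub enum CNil {}

pub enum Coproduct<H, T> {
    Inl(H),
    Inr(T),
}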

Really, I feel like we are beyond this right now. Forget methods, there are far more interesting questions to ask about things like associated types, trait solving and type inference.

Based on my reading of the thread, I find it hard to believe you can sanely automatically implement traits with associated types or constants for !, at all. Being able to return unimplemented!() from RPIT (return-position impl Trait) methods returning "pure virtual" traits would be a good start, anyway.
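For instance, something like the following is rejected today ("cannot resolve opaque type", give or take the exact diagnostic), because the hidden type behind the impl Trait is never pinned down; the suggestion above would let ! fill that role:

fn todo_iter() -> impl Iterator<Item = u8> {
    unimplemented!()
}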

Hello, maybe I’m very wrong, but it seems to me that maybe there shouldn’t be a single “never type”. Instead, I think ! should be treated as syntax sugar for some special existential type, whose bounds can never be satisfied. It has some additional coercion rules that allow it to pick up additional bounds (traits, etc.), so unimplemented!() returns some anonymous type that satisfies impl Never and impl (Never + Iterator<Item=u8>) and impl (Never + Iterator<Item=char>), etc.

The typemap example is very good. After thinking for a while, I don’t think TypeMap would be possible to implement at all in a fully impl-relevant setting. (You could just impl typemap::Key for u32 twice and alternate between impls when accessing the map.) Unless the map was somehow accessed not just by type, but by its associated impl of typemap::Key.

I concede my point. Multiple impls per trait are not workable, even in this limited form.

1 Like

Part of the point of having a single canonical ! / “never” type is so that we can do things like let a value of type Result<T, !> provide direct access to the T, statically knowing that extracting it can never panic. In particular, I believe we want to write specializations that explicitly mention !.
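A sketch of those ergonomics (nightly-only, behind the never_type feature; with exhaustive_patterns the Err arm could be omitted entirely):

#![feature(never_type)]

fn always_succeeds() -> Result<u32, !> {
    Ok(42)
}

fn main() {
    let n = match always_succeeds() {
        Ok(n) => n,
        // `never` has type `!`; the empty match proves this arm can
        // never run, so extracting the value cannot panic.
        Err(never) => match never {},
    };
    assert_eq!(n, 42);
}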

Plus, “multiple never types” is pretty close to the status quo, since anyone can (and many already do) write a zero-variant enum to get similar behavior. My understanding is that “we” (the community I mean) accepted an RFC for an official never type in the first place precisely because “we” agreed that having only one never type instead of several is an improvement.

This idea is interesting. Notably, I think it would fix the typemap problem (though in a way that is perhaps surprising), because ! would not actually implement Innocent. In this model, trait impls can only be added to ! at coercion sites, meaning that it must exist in a function body (and thus counter-examples can no longer be constructed purely through type-level programming).

There should, of course, be a restriction against the same impl Never type having multiple conflicting trait impls added to it.

I feel like there must be demons lurking somewhere beneath the surface, waiting to be found… but at the moment, it sounds quite promising.

Note: A downside is that I think manual impls of traits for ! will become impossible. (But this seems to be true of many of our alternatives.)

Why not an Uninhabited marker trait, then?

Hm, from what I can tell this simply hasn't been proposed before. Probably in part because no one even proposed making ! implement "all traits" before. There was an RFC for making it implement only the traits that had a single obvious, trivial implementation but that got postponed until we gained more experience with ! in general:

aturon commented on 8 Jun 2016
I also want to raise a specific area of nervousness: we've talked about possible extensions to the trait system like negative bounds or mutually-exclusive traits, and the latter in particular would need to be done carefully if we also had auto-implementation. I also feel uneasy about the fact that this would only apply to some/most traits -- it feels like nontrivial complexity for something that will probably be pretty obscure in practice.

More broadly, though, I'm inclined to reach a decision on #1216, get some experience using it, and then determine from that experience how well-motivated this RFC is. I think it'd be premature to reach a decision on this RFC until then. (And on that note, I think we should postpone this RFC for the time being.)

withoutboats commented on 8 Jun 2016
@aturon's point about negative bounds seems particularly apt. Why should !: Hasher but not !: !Hasher? There's also only one unique impl of !Hasher for ! (indeed, there's only one unique negative impl of any trait for any type). Regardless of whether or not the complexity is justified, this proposal just seems like incorrect behavior in light of these sorts of extensions.

Turning the never type into a trait over a family of never types seems like it'd also be very risky and perhaps simply incorrect to do before we sort out all these other trait system extensions we're interested in.


The good news is that this is all irrelevant to the specific case that originally started this thread: making an impl Error type parameter default to ! when no value for it is ever provided. Error is one of many traits with a single, obvious implementation for !, one that doesn't raise any of these concerns about type-system soundness/sanity/extensibility.
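For the record, that single, obvious implementation looks like the following sketch (written with a hand-rolled empty enum, since impls for ! itself can only live in the standard library):

use std::error::Error;
use std::fmt;

// Uninhabited stand-in for `!`.
#[derive(Debug)]
enum Never {}

impl fmt::Display for Never {
    fn fmt(&self, _: &mut fmt::Formatter<'_>) -> fmt::Result {
        match *self {} // unreachable: no value of this type exists
    }
}

impl Error for Never {}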

1 Like

Ah, yeah, impl relevance seems equivalent to the “form of reasoning” I was thinking of. Another thing it would interfere with, AFAIK, is specialization. If you have:

impl<T> Foo for T {
    default fn foo() {}
}

impl<T: Bar> Foo for (T, T) {
    fn foo() { ... }
}

…and you end up passing, say, (u32, u32), but where the two u32s are tagged with different impls of Bar, then the specialized impl would have to not match. This wouldn’t be inherently unsound, but it would certainly be surprising.

1 Like

I believe you can implement typemap in safe code based on the standard library Any trait and downcast_ref. In fact, the actual implementation is based on a similar trait it defines called UnsafeAny. From a quick look at the source, it seems like it would be pretty straightforward to switch to regular Any; it would just come at the cost of adding redundant runtime checks.

The implementation of downcast_ref itself contains unsafe code, but it's stable functionality in the standard library.
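A minimal sketch of that safe construction (the names here are illustrative, not typemap's actual API):

use std::any::{Any, TypeId};
use std::collections::HashMap;

pub trait Key: Any {
    type Value: Any;
}

pub struct TypeMap {
    entries: HashMap<TypeId, Box<dyn Any>>,
}

impl TypeMap {
    pub fn new() -> Self {
        TypeMap { entries: HashMap::new() }
    }

    pub fn insert<K: Key>(&mut self, val: K::Value) {
        self.entries.insert(TypeId::of::<K>(), Box::new(val));
    }

    pub fn get<K: Key>(&self) -> Option<&K::Value> {
        self.entries
            .get(&TypeId::of::<K>())
            // The redundant runtime check mentioned above: downcast_ref
            // re-verifies a type that the TypeId key already determines.
            .and_then(|boxed| boxed.downcast_ref::<K::Value>())
    }
}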

Precise type equality is already unsound in the presence of specialization because of lifetimes (lifetimes are erased before monomorphization, so specialization can't even soundly observe that &'a u32 is equal to itself) - we already have to find some sort of workaround, which will probably be a system where specialization can't depend on these sorts of equalities.

Actually, I don't think this is as unsound as it seems -- if we wanted to go into this rabbit hole, we could make ! behave sort of like a mix of impl Trait and generators - have each coercion site with a defaulted ! create a fresh uninhabited opaque type, and assign values to various required projections off it.

This would also "solve" the typemap problem, because different instantiations of that opaque type will have a different type_id.

But I think all the ways of making this work are fairly complicated, so I don't quite want to go there (@nikomatsakis please don't kill me for suggesting this).

The stupidest way of making this work would be to make the "literal" ! be impl Uninhabited. This would make working with ! in structs super annoying - you couldn't put a Vec<Option<!>> value into a Vec<Option<!>> struct field, because the !s in them would be different types. After all, the Vec could have theoretically stored some sort of associated type inside it.

A more complicated, but less sanity-mangling, way would be to just have uninhabited type variables default to some sort of "flexible bottom type", and have typeck somehow "declare" all projections off that type. This would just leave you with the problems of default type variables in inference, which is mostly that they make inference behave unpredictably in all sorts of edge cases.

1 Like

In simpler words, it is very easy to have “uninhabited impl trait expressions” in the sense of RFC 2604 if the user specifies the desired signature, and the only problem that remains is to make type inference work out sanely without the user writing out the signature. As always with type inference, the insanity occurs in the edge cases.

Dealing with “pseudo-decidable” type equality by not having the types be pd-equal to themselves…

That idea, I think, works; it seems to me more akin to typed holes (which would exist for the express and sole purpose of facilitating development), and it should have some other surface syntax than ! (which should always be the same type).

How are typed holes related to this? Do you have some other meaning for "typed holes" other than forall<T> T that dumps compiler diagnostics based on what T ends up being?