Okay, back to this.
functions/operations/blocks
Do we need unsafe-polymorphism here?
I argue that no, we do not. An unsafe operation (or function; I’ll just say operation in the future) may carry with it an arbitrary proof obligation, defined by the operation itself. Because the obligation is completely unbounded, there’s no way for polymorphic code to understand what it means. One example I’ve heard cited is map.
Unsafe-polymorphic map, as a method on Iterator, would be something like:
?unsafe fn map<R>(self, f: ?unsafe impl FnMut(Self::Item) -> R) -> impl Iterator<Item=R>;
But what does this mean in the unsafe case? map can’t really sensibly call f at all: map has no idea what the guarantees are. So map’s implementor has no more information about what they can do with f than they would have otherwise. The caller is still responsible for ensuring, based on map’s documentation, that f won’t be called unsafely. This is just trusted fn map, then, which I explained above I don’t think is useful.
So the caller should just write || unsafe { ... }, discharging their proof obligations in the closure. Or they can still write unsafe around the map call, since unsafe penetrates syntactically into the closure. And map was the only possible reason I could think of that we might actually have a use for unsafe polymorphism here. In fact, it’s not even clear to me that unsafe fn types are inherently meaningful and useful, because they can only be used with out-of-band information about the proof obligation their values carry. I’d think that in principle there ought to be an unsafe unsafe fn -> fn conversion, but it can be implemented via a closure, so it needs no special support.
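To make that concrete in current Rust: frob and its nonzero-argument obligation below are made up for illustration, but they show both the obligation being discharged at the call site and the closure-based unsafe fn -> fn conversion.

```
// `frob` is a made-up unsafe fn whose documented obligation is "x must be nonzero".
unsafe fn frob(x: u32) -> u32 {
    u32::MAX / x
}

fn main() {
    let xs = vec![1u32, 2, 3]; // all nonzero, so frob's obligation holds for each
    // Discharge the proof obligation inside the closure; map itself stays safe.
    let ys: Vec<u32> = xs.iter().map(|&x| unsafe { frob(x) }).collect();
    assert_eq!(ys, [u32::MAX, u32::MAX / 2, u32::MAX / 3]);

    // The unsafe fn -> fn conversion, via a closure that asserts the obligation
    // is met on every call (here, by construction: max(1) is never zero).
    let safe_frob: fn(u32) -> u32 = |x| unsafe { frob(x.max(1)) };
    assert_eq!(safe_frob(2), u32::MAX / 2);
}
```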
It also doesn’t make sense for trusted impls/values to be polymorphic over unsafe, for similar reasons. The proof obligation is flipped here: a trusted impl or value is promising stronger guarantees than “never commits UB”. unsafe relaxes the guarantee to “might cause UB”. So the polymorphism here would be “if the parameter is unsafe, I am not trusted.” But the parameter might as well be nose_demons(), so that just means “if the parameter is unsafe, I promise never to call it”. This rather defeats the point of passing it as a parameter.
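In the hypothetical syntax, the signature such polymorphism would require makes the problem obvious (run and #u are made up here):

```
?trusted fn run<#u: ?unsafe>(f: #u impl FnOnce()) { ... }
// When #u = unsafe, f might as well be nose_demons(), so the only way run can
// keep any guarantee stronger than "never commits UB" is to never call f.
```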
Incidentally, this gives me my answer: unsafe is not an effect.
types/values
I can see five ideas for trusted-polymorphism at the type/value level:
- Functions whose results are trusted iff their inputs are.
- Applying trusted mutations regardless of whether the value is trusted.
- Allowing a field to be marked trusted independent of its parent type.
- Making a type’s trusted-ness contingent on that of a field.
- Making trait impls trusted-polymorphic over the trusted-ness of the implementing type.
The first is the most straightforward, and in my view the most natural case of polymorphism (#t is an effect variable here, even though I’m still not sure this is an effect):
fn into_bar<#t : ?trusted>(b: #t Bar) -> #t Foo { ... }
This seems important to have so I’m not going to dwell on it.
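For intuition, here is what the two instantiations of that signature would mean, still in the hypothetical syntax:

```
// #t instantiated to trusted: trusted in, trusted out.
fn into_bar(b: trusted Bar) -> trusted Foo { ... }
// #t instantiated to nothing: the same body, but no promises either way.
fn into_bar(b: Bar) -> Foo { ... }
```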
Trusted mutations are, in principle, similar:
fn rotate(&mut ?trusted self) { ... }
The idea here is that I promise to perform only mutations that preserve trustedness: given a trusted starting state, I promise a trusted ending state. (I do not need to preserve trustedness throughout, because I have the unique reference, so I can break invariants in the middle, though I need to watch out for panic safety.) But I can operate perfectly safely on non-trusted values too; it just may be garbage in, garbage out.
I can’t just use &mut, because then I’m not making any promise about the operations I perform. I can’t take &mut trusted, because the conversion from &mut in the caller would be unsafe. So I end up needing polymorphism. But after a lot of thought, I don’t think there’s any fundamentally new technology here, other than that being forced to follow the restrictions of trusted mutations without the benefit of their assumptions may lead to annoying code.
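For intuition, a current-Rust analogue, with a made-up Sorted newtype standing in for a trusted invariant: a trustedness-preserving mutation is exactly the kind of mutation we would be happy to expose on both the raw type and the wrapper.

```
// Hypothetical newtype whose "trusted" invariant is: elements are sorted.
struct Sorted(Vec<i32>);

impl Sorted {
    // dedup only removes consecutive duplicates, so it preserves sortedness:
    // trusted in, trusted out.
    fn dedup(&mut self) {
        self.0.dedup()
    }
}

// The same mutation on a plain Vec<i32> is still perfectly safe; with respect
// to sortedness it's just garbage in, garbage out.
fn dedup_raw(v: &mut Vec<i32>) {
    v.dedup()
}
```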
The third and fourth relate to fields. The third is that I might have struct Foo { b: Bar } and want a type denoting “Foo with trusted b”. I’m not sure we really have the technology for this, but we could invent it given a compelling use case. It’s also not really polymorphism, but it feels like it fits here. The fourth is marking a field in such a way that f: trusted Foo would imply f.b: trusted Bar. Being able to do this is merely a convenience; Foo could promise it as an invariant and let unsafe code anywhere do the conversion.
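The workaround for the fourth idea looks roughly like this today, with made-up TrustedFoo and TrustedBar newtypes standing in for trusted Foo and trusted Bar:

```
struct Bar(u8);
struct Foo { b: Bar }

#[repr(transparent)]
struct TrustedBar(Bar); // invariant: whatever "trusted Bar" would promise

struct TrustedFoo(Foo); // documented invariant includes: .0.b upholds TrustedBar's invariant

impl TrustedFoo {
    fn b(&self) -> &TrustedBar {
        // SAFETY: TrustedBar is repr(transparent) over Bar, and TrustedFoo's
        // documented invariant says the field upholds TrustedBar's invariant.
        unsafe { &*(&self.0.b as *const Bar as *const TrustedBar) }
    }
}
```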
Aside: do we actually want trusted types/values???
Ok, I’ll admit I got kind of lost in the weeds here. I don’t think this is a useful feature in Rust as it exists, but it is an interesting direction to think about. It doesn’t really make sense to have a data structure with only one meaningful set of invariants, because the untrusted version would be kind of… bad. Imagine !trusted str, which would basically just be a [u8] with different methods. But we don’t want to say that trusted [u8] is always UTF-8, since that makes no sense. Polymorphism may also be useful when I look at traits below.
Newtypes right now are pretty heavyweight, because you have to implement all the methods and traits that you want to inherit. If they were more lightweight, we could have something like struct DefinitelyANumber(f32) with an invariant that it isn’t NaN. Then we could have impl ?trusted Ord for ?trusted DefinitelyANumber { ... }, and that in turn would enable polymorphism over whether a list of f32s promises not to have a NaN in it.
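For scale, a minimal sketch of what that newtype costs today; the new constructor and the comparison impls are roughly the boilerplate a lightweight newtype plus impl ?trusted Ord would eliminate:

```
#[derive(Clone, Copy, PartialEq, PartialOrd)]
struct DefinitelyANumber(f32); // invariant: never NaN

impl DefinitelyANumber {
    fn new(x: f32) -> Option<Self> {
        if x.is_nan() { None } else { Some(Self(x)) }
    }
}

impl Eq for DefinitelyANumber {}

impl Ord for DefinitelyANumber {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        // Total order justified by the no-NaN invariant.
        self.partial_cmp(other).expect("invariant: not NaN")
    }
}
```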
Of course, it’s a bit unfortunate that this ends up requiring new types, because there’s no difference in the untrusted case between f32 and DefinitelyANumber, and there isn’t really such a thing as trusted f32. So perhaps a direction would be to have named invariants, so that we could do something like impl ?trusted Ord for ?trusted[DefinitelyANumber] f32. This would also allow expressing multiple invariants simultaneously, without creating an n^2 problem.
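Sketching that direction a little further (Finite is a second made-up named invariant):

```
// One impl serves f32 both with and without the named invariant.
impl ?trusted Ord for ?trusted[DefinitelyANumber] f32 { ... }

// Multiple named invariants compose without needing a newtype per combination.
fn mean(xs: &[trusted[DefinitelyANumber, Finite] f32]) -> f32 { ... }
```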
So maybe this would be useful at some point.
And that took me way longer than expected, so I’ll handle polymorphism in the last case tomorrow.