It’s well known that Rust has some composition issues with enums. Specifically, it’s impossible to recover the semantics of `match` within closures, even ignoring things like if guards, the ability to assign to uninitialized variables, and flexibility with respect to reference type and ownership: the borrow checker can’t determine that only one of the clauses passed to another function will be called, and therefore can’t prove that it’s safe for them to grab non-shared references to, or take ownership of, the same resources (except for `Copy` types). This is because in Rust, distinct arguments implicitly use separating conjunction, which means that only persistent (`Copy`) types can be shared between them.
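To make the problem concrete, here is a minimal illustration (my own toy example, not a real API): a “match-like” function that takes its two clauses as separate closure arguments. Because the arguments are separate, the clauses can only share `Copy` captures, even though only one of them will ever run.

```rust
// A "match-like" function whose clauses are separate arguments --
// i.e., implicit separating conjunction between `nil` and `cons`.
fn case_pair<U>(is_nil: bool, nil: impl FnOnce() -> U, cons: impl FnOnce() -> U) -> U {
    if is_nil { nil() } else { cons() }
}

fn main() {
    // Copy types are "persistent": both clauses may capture them by value.
    let n: i32 = 42;
    let r = case_pair(false, move || n, move || n + 1);
    println!("{}", r); // 43

    // With a non-Copy capture this is rejected, even though only one
    // closure can ever be called:
    // let s = String::from("hi");
    // case_pair(false, move || s.len(), move || s.len());
    // error[E0382]: use of moved value: `s`
}
```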
Fortunately, it occurs to me that linear logic already has a solution to this problem: it differentiates between separating and normal conjunction. Normal conjunction works like most languages do, in the sense that it lets you pass two arguments even if they share resources! The magic is in the rule for eliminating a normal conjunction: eliminating `P ∧ Q` with a pattern `[ipat1 ipat2]` requires one of the following to be true (stolen from the Iris documentation):
- Either the proposition `P` or `Q` should be persistent.
- Either `ipat1` or `ipat2` should be `_`, which results in one of the conjuncts being thrown away.
The reason this is sound (my intuition for it, anyway) is that a conjunction says that both types are inhabited in the given frame, but because we live in a resourceful world they are not necessarily both true disjointly. Therefore, if we actually use the resources from one side of the conjunction, we may invalidate the other half in the process.
This sort of reasoning is tremendously restrictive in a language with aliasing and linear (okay, affine) resources, which is why most languages either always use normal conjunction (and only have persistent resources) or pretty much always use separating conjunction. However, normal conjunction gives us precisely the right kind of reasoning for representing a function signature that takes `match` clauses! For example, here is a possible way to describe a function for matching on lists using normal conjunction (the syntax is obviously just an example):
```rust
enum List<T> {
    Nil,
    Cons(T, Box<List<T>>),
}

impl<T> List<T> {
    fn case_mut<'a, FCons, FNil, FConj, U>(&'a mut self, clauses: FConj) -> U
    where
        FNil: FnOnce() -> U,
        FCons: FnOnce(&'a mut T, &'a mut List<T>) -> U,
        FConj: FNil & FCons,
    {
        match *self {
            List::Nil => {
                // Borrow checker enforces that since clauses.0 is moved,
                // clauses.1 cannot be, and considers clauses consumed--
                // similarly, if clauses.0 were taken by mutable reference,
                // clauses.1 would be considered consumed.
                (clauses.0)()
            },
            List::Cons(ref mut hd, ref mut tl) => {
                // Same as above, but with clauses.1 swapped with clauses.0.
                (clauses.1)(hd, tl)
            },
        }
    }
}
```
```rust
fn main() {
    let mut l = List::Cons("Hello".to_string(),
        Box::new(List::Cons("World".into(),
            Box::new(List::Nil))));
    let not_copyable = "!".to_string();
    l.case_mut((
        move || {
            // Moving out of a resource!
            let _x = not_copyable;
        },
        move |_, _| {
            // Moving out of the *same* resource (this is what would throw
            // the borrow checker in modern Rust).
            let _x = not_copyable;
        },
    ));
}
```
If you change the above code to use a pair of two generics with the appropriate `FnOnce` bounds instead of the `FConj` generic, it will fail because the second move is disallowed (and if you then comment out one of the moves, Rust will happily compile it today). With a few tweaks you can produce the equivalent issue with `FnMut` (there, the associated problem is borrowing `not_copyable` mutably in both closures).
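For reference, here is a compilable version of the two-generic variant I mean (the `case_mut_sep` name and the simplified one-element list are mine). It builds only because the second clause does not touch `not_copyable`; restoring the move there is exactly what today’s borrow checker rejects:

```rust
enum List<T> {
    Nil,
    Cons(T, Box<List<T>>),
}

impl<T> List<T> {
    // Same idea as case_mut, but the clauses are two separate arguments --
    // i.e., implicit separating conjunction between them.
    fn case_mut_sep<'a, U>(
        &'a mut self,
        nil: impl FnOnce() -> U,
        cons: impl FnOnce(&'a mut T, &'a mut List<T>) -> U,
    ) -> U {
        match self {
            List::Nil => nil(),
            List::Cons(hd, tl) => cons(hd, &mut **tl),
        }
    }
}

fn main() {
    let mut l = List::Cons("Hello".to_string(), Box::new(List::Nil));
    let not_copyable = "!".to_string();
    let n = l.case_mut_sep(
        move || not_copyable.len(),
        // The second clause cannot also capture `not_copyable`; writing
        // `move |_, _| not_copyable.len()` here fails with
        // error[E0382]: use of moved value: `not_copyable`.
        move |hd, _| hd.len(),
    );
    println!("{}", n); // 5
}
```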
The rules, as outlined above, are pretty straightforward (you hold onto the conjunction of the resources used by either side, and in order to actually use either half you have to choose just one of them for any given lifetime, or one of them has to be `Copy`), and they fix a somewhat embarrassing compositionality problem in Rust today. I also find it encouraging that they come up naturally in linear logic, which means they’re not just an ad hoc solution to this `match` issue.
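It’s worth noting what people do today instead: collapse both clauses into a single closure over an `Option` (or a dedicated enum), so that there is only one capture environment and sharing is trivially fine. This sketch (the `case_mut_opt` name is mine) shows why a single environment sidesteps the problem, at the cost of losing the direct clause-per-arm shape:

```rust
enum List<T> {
    Nil,
    Cons(T, Box<List<T>>),
}

impl<T> List<T> {
    // One closure over an Option stands in for both match clauses.
    fn case_mut_opt<'a, U>(
        &'a mut self,
        clause: impl FnOnce(Option<(&'a mut T, &'a mut List<T>)>) -> U,
    ) -> U {
        match self {
            List::Nil => clause(None),
            List::Cons(hd, tl) => clause(Some((hd, &mut **tl))),
        }
    }
}

fn main() {
    let mut l = List::Cons("Hello".to_string(), Box::new(List::Nil));
    let not_copyable = "!".to_string();
    let msg = l.case_mut_opt(move |arm| match arm {
        // Both arms may use the same captured value, because they live in
        // one closure -- this is exactly the sharing `match` permits.
        None => not_copyable,
        Some((hd, _tl)) => {
            hd.push_str(&not_copyable);
            hd.clone()
        }
    });
    println!("{}", msg); // Hello!
}
```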
However, I’m not at all sure about syntax for them. In particular, in order to construct a conjunction that actually shares resources, both halves would have to “know” they were inside a conjunction for the borrow checker to allow them through. From one perspective, this isn’t that weird (it’s how `match` works, after all), but it feels strange to me to enact this kind of restriction when it comes to actually creating a type (though maybe constructing references sort of counts?).
I also don’t know how useful conjunctions actually are outside of replicating `match` syntax in closures. I suppose they could also be used for passing two mutable references into the same data structure such that only one of them could be used, but this must be severely restricted, since a unique reference can change a structure in a way that destroys other references into it, even if they aren’t being used at the same time (though maybe this wouldn’t be an issue if you weren’t allowed to operate on the unique reference before putting it in the conjunction; so we might be able to loosen the requirement for `FnMut` to “only one side may be consumed for the duration of the borrow” or something, which might make it more useful). It would also not be generally legal to take a reference into one of the two halves of the conjunction, since owned values would be shared between the two halves. And you wouldn’t really be able to operate on duplicated resources like this, just move them; operating on them would require eliminating the conjunction. So again it’s hard (for me, anyway) to see how to make this useful for things that aren’t closures.
The fact that the syntax actually generates a type with a trait bound, instead of just being a type, is a bit weird, but it seems necessary: it needs to be able to customize drop glue and restrict the types on either side (in order to ensure that any values that are moved into both halves are used exactly once), which implies you’re generating a fresh type for every conjunction.
The constraint on types is similar to what is needed for `PinProjections` (the types at the top level of a conjunction can’t require a destructor), except that here even `PinDrop` isn’t okay (think of a destructor like `Vec`’s: if `Vec` doesn’t implement any way to get a reference to a `!Unpin` element unless the vector is frozen, or is unconditionally `Unpin`, it’s fine for it to implement `PinDrop`, but it is definitely not okay to run the destructor twice for the same `Vec`!).
Fortunately, just like `PinDrop`, only the top-level types would have this restriction: custom drop glue could be created per conjunction, and any value that was moved into by both halves of the conjunction would run its drop glue only once (also, note that a pinned `!Unpin` value couldn’t move even once, let alone twice, and it would not be safe to project a pin of a conjunction to a pin of either half of the conjunction, so pinned types would still be safe under this scheme). Unfortunately, there’s no easy way to decide (in a generic bound) whether a top-level type doesn’t implement `Drop`, which would be needed information if you wanted to safely construct a `&` type. I believe this would even be necessary for function types (given that it is possible to manually implement destructors for something implementing `FnOnce`). This is a bit ergonomically disappointing. Since “does the top-level type implement `Drop`” is also useful for pin projections, this might inform the direction of the pinning RFC.
Another alternative might be to just assert that `Drop` is only run on one of the two halves of the `&` (leaving which one unspecified) and then drop any unshared fields. That gets rid of the restrictions on `Drop`, but doesn’t change the fact that you need fresh drop glue per instance (since you have to track which values were moved twice to make sure they only get dropped once). That sounds less than fun, and while it might be safe, it is not going to give the desired behavior (presumably if you had `U & Vec<T>`, you’d rather the Rust compiler not let you do it than have the `Vec`’s destructor not be called!).
Increasingly, I like the idea of getting rid of the `Drop` requirements in another way: instead of making `U & T` the type, you could make `&` a combinator on traits, so that all you’d have is a single type (that was moved into by both sides) that implements `FnOnce() -> U & FnOnce(&'a mut T, &'a mut List<T>) -> U` or something. That would almost make `&` the same as the existing `+`, though, so if we were to go that route it might be better to figure out some other way of doing it, e.g. implementing a different `FnOnce` for two associated types of the closure and having it provide functions for converting `self` into either of them. This still wouldn’t eliminate the need for special syntactic considerations on the part of the borrow checker, since it would still have to unify any shared fields into a single structure and generate quite different trait implementations. This would also pretty much limit the utility of this sort of reasoning to closures; but it seems hard to apply to anything else, so that may be okay.
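To make that last idea concrete, here is a rough sketch, in today’s Rust, of what such a desugaring might look like. Everything here is my invention (the `Conj` trait, the `Captures` struct, and the boxed closures; a real implementation would presumably have the compiler generate unboxed closure types): a single capture environment owns the shared resource, and converting it into either closure consumes it, so only one half can ever be used.

```rust
// Hypothetical sketch of the "combinator on traits" idea: one capture
// struct, convertible into either closure, but only one of them.
struct Captures {
    shared: String,
}

trait Conj {
    type Left;
    type Right;
    fn left(self) -> Self::Left;
    fn right(self) -> Self::Right;
}

impl Conj for Captures {
    type Left = Box<dyn FnOnce() -> usize>;
    type Right = Box<dyn FnOnce(usize) -> usize>;
    fn left(self) -> Self::Left {
        // `self` (and thus `shared`) moves into whichever closure is chosen.
        Box::new(move || self.shared.len())
    }
    fn right(self) -> Self::Right {
        Box::new(move |n| self.shared.len() + n)
    }
}

fn main() {
    let clauses = Captures { shared: "Hello".to_string() };
    // Choosing `left` consumes the conjunction; `right` is gone.
    let f = clauses.left();
    println!("{}", f()); // 5
}
```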
So: this isn’t exactly a proposal. And it may in fact have already been suggested. But I thought it was an interesting thought experiment, and I’m curious about whether there would be any interest in such a feature.