So as part of the effort to “spike” a chalk-based solver, I’ve been investigating how to integrate Chalk’s SLG solver into Rust’s inference contexts. I’ve now opened up a PR on the Chalk repo that takes a particular approach. This post thus has a few parts:
- Describe what I did so far, which I think will work for now
- Open up the floor to discuss the best solution longer term
What the PR does
(This text is copied from the PR description and lightly edited)
The PR shifts the way that the SLG trait works to support “transient” inference contexts. In the older code, each strand in the SLG solver had an associated inference context that it would carry with it. When we created new strands, we would fork this inference context (using clone). (My intention was to move eventually to a persistent data structure for this, although it’s not obvious that this would be faster.)
However, the approach of storing an inference context per strand was pretty incompatible with how rustc’s inference contexts work. They are always bound to some closure, so you can’t store them in the heap, return, and then come back and use them later. The reason for this is that rustc creates an arena per inference context to store the temporary types, and then throws that arena away when you return from the closure.
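To make the constraint concrete, here is a minimal sketch of this closure-bound pattern. This is not rustc’s actual API; the names (`Arena`, `InferCtxt`, `enter`, `mk_type`) are invented for illustration. The point is that the borrow checker ties the context to the closure, so it cannot escape onto the heap:

```rust
// Stand-in for an arena of interned temporary types (not rustc's real arena).
struct Arena {
    types: Vec<String>,
}

// A hypothetical inference context that borrows its arena.
struct InferCtxt<'a> {
    arena: &'a mut Arena,
}

impl<'a> InferCtxt<'a> {
    // Allocate a "type" in the arena, returning its index.
    fn mk_type(&mut self, name: &str) -> usize {
        self.arena.types.push(name.to_string());
        self.arena.types.len() - 1
    }
}

// All work with the inference context happens inside the closure; the arena
// (and every temporary type allocated in it) is dropped when `enter` returns,
// so the context cannot be stored and resumed later.
fn enter<R>(f: impl FnOnce(&mut InferCtxt<'_>) -> R) -> R {
    let mut arena = Arena { types: Vec::new() };
    let mut infcx = InferCtxt { arena: &mut arena };
    f(&mut infcx)
} // arena is thrown away here

fn main() {
    let n = enter(|infcx| {
        infcx.mk_type("?T");
        infcx.mk_type("?U");
        infcx.arena.types.len()
    });
    assert_eq!(n, 2);
    println!("allocated {} temporary types", n);
}
```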
Originally, I hoped to address this by changing rustc’s API to use a “Rental-like” interface. But this proved more challenging than I anticipated. I made some progress, but I kept hitting annoying edge cases in the trait system: notably, the lack of generic associated types and the shortcomings of normalization under binders. Ironically, those are two of the bugs I most hope to fix via this move to Chalk! It seemed to me that this route was not going anywhere.
So this PR takes a different tack. The SLG solver trait now never gives ownership of an inference context away; instead, it invokes a closure, passing in a &mut dyn InferenceTable trait object. This means that the callee can only use that inference table during the closure and cannot allow it to “escape” onto the heap (the anonymous lifetime ensures that).
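The shape of that API can be sketched as follows. The trait and method names here are illustrative, not Chalk’s exact definitions; the point is that the solver retains ownership of the table and callers only ever see a short-lived `&mut dyn` borrow:

```rust
// A hypothetical, simplified version of the trait the solver lends out.
trait InferenceTable {
    // Create a fresh inference variable, returning its index.
    fn new_variable(&mut self) -> usize;
}

struct SimpleTable {
    next_var: usize,
}

impl InferenceTable for SimpleTable {
    fn new_variable(&mut self) -> usize {
        let v = self.next_var;
        self.next_var += 1;
        v
    }
}

// The solver owns the table; the callee receives `&mut dyn InferenceTable`
// for the duration of the closure only. The anonymous lifetime on the
// reference prevents the callee from stashing it on the heap.
fn with_fresh_table<R>(f: impl FnOnce(&mut dyn InferenceTable) -> R) -> R {
    let mut table = SimpleTable { next_var: 0 };
    f(&mut table)
}

fn main() {
    let (a, b) = with_fresh_table(|table| {
        let a = table.new_variable();
        let b = table.new_variable();
        (a, b)
    });
    assert_eq!((a, b), (0, 1));
}
```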
The solver in turn distinguishes between strands that are “at rest”, waiting in some table, and the current strand. A strand-at-rest is called a CanonicalStrand, and it is stored in canonical form. When we pick up a strand to run, we create a fresh inference context, instantiate the strand’s variables in it, and then start using it.
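The canonicalize-then-instantiate cycle can be sketched like this. The types and function names are invented, and a “goal” is reduced to just a list of inference-variable indices, but the mechanics match the description above: a strand at rest stores its goal with variables renumbered into a compact canonical form, and picking it up means allocating fresh variables in a new table and substituting them in:

```rust
use std::collections::HashMap;

// A "goal" here is just a list of inference-variable indices.
type Goal = Vec<usize>;

// A strand at rest: its goal, with variables renumbered 0..num_vars.
struct CanonicalStrand {
    num_vars: usize,
    goal: Goal,
}

// Canonicalize: renumber variables in order of first appearance.
fn canonicalize(goal: &Goal) -> CanonicalStrand {
    let mut map = HashMap::new();
    let mut canonical = Vec::new();
    for &v in goal {
        let next = map.len();
        let c = *map.entry(v).or_insert(next);
        canonical.push(c);
    }
    CanonicalStrand { num_vars: map.len(), goal: canonical }
}

// Instantiate into an inference table whose next free variable index is
// `first_fresh_var`: allocate one new variable per canonical variable,
// then substitute.
fn instantiate(strand: &CanonicalStrand, first_fresh_var: usize) -> Goal {
    strand.goal.iter().map(|&c| first_fresh_var + c).collect()
}

fn main() {
    let goal = vec![7, 3, 7]; // a goal mentioning variables ?7 and ?3
    let canonical = canonicalize(&goal);
    assert_eq!(canonical.num_vars, 2);
    assert_eq!(canonical.goal, vec![0, 1, 0]);
    // Resume in a table that already contains five variables:
    assert_eq!(instantiate(&canonical, 5), vec![5, 6, 5]);
}
```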
Longer term questions
As implemented, that PR is probably somewhat inefficient. We have to do a lot of substitution on a pretty regular basis; also, in rustc, creating inference contexts is not especially lightweight. For the moment, though, I think that’s fine: I’m mostly interested in pushing through until we have something that works. But it’s worth thinking about how we want things to work longer term.
In general, the idea of creating “transient” inference contexts but storing things in canonical form may have some merit. Note for example the final commit in the PR (“pointless micro-optimization”), which shows that when we create a fresh inference context, we don’t have to fold or substitute the canonical value, because we already know that each variable will be mapped to itself. This is pretty tied to Chalk’s IR, which uses the same variant to represent bound and free variables (via De Bruijn indexing), and also uses the same indices to represent types and lifetimes. But this turns out to be a nice IR choice, and we could adopt it in rustc.
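The observation behind that commit can be shown with the same invented instantiation sketch as before (this is not Chalk’s actual code): when the inference context is brand new, canonical variable i is instantiated as fresh variable i, so the substitution is the identity and folding the canonical value can be skipped entirely:

```rust
// Instantiate a canonical goal (a list of variable indices renumbered
// from 0) into a table whose next free variable index is `first_fresh_var`.
fn instantiate(canonical_goal: &[usize], first_fresh_var: usize) -> Vec<usize> {
    canonical_goal.iter().map(|&v| first_fresh_var + v).collect()
}

fn main() {
    let canonical_goal = vec![0, 1, 0];
    // Brand-new context: the next variable index is 0, so the substitution
    // maps each variable to itself and could be skipped...
    assert_eq!(instantiate(&canonical_goal, 0), canonical_goal);
    // ...unlike instantiating into a context that already has two variables.
    assert_eq!(instantiate(&canonical_goal, 2), vec![2, 3, 2]);
}
```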
However, rustc adds another consideration, which is memory use. Right now, we have this scheme that allows each function to throw its types away after we are done type-checking it. I seem to recall this was a pretty decent win for memory use, but it’s not especially compatible with this “suspend work” scheme. (There is also a certain amount of tension here around parallel threads and the need to canonicalize types.)
As one final twist, with NLL, some aspects of rustc’s current workings are changing. For example, the main Rust type checker will hopefully be completely erasing regions by the time we are done, which means that one major source of memory use (tons and tons of region variables) would be deferred (though we’ll still create lots of type variables). Those region variables would instead be created during NLL region checking, during which there is no need for type variables! So maybe we can treat region and type variables separately.
I could imagine a few directions:
- Keep the split established by the PR: use canonical form to store things across inference sessions, and push hard on slimming down the inference context and doing other optimizations.
- Note though that there is some tension here: e.g., in Chalk, we are able to use “just a clone” because we use the same variant to represent an inference variable as a canonical value. But in rustc we use different variants so we can tell which types should go into the global arena and which are local; so we’d have to either substitute, or else teach the code that type variables may be represented by another variant (that last idea is actually probably fairly easy, and maybe the way to go).
- The most extreme would be to remove “local” inference contexts from rustc and go back to “all things are global”. The downside of this is that we create a lot of transient types that live forever in global arenas.
- It also forces more interaction between threads, but we could probably address that in other ways.
- An alternative would be for trait solving to represent variables in a different way than the “main” type checker.
- Trait variables would be globally allocated, so that they live for the 'gcx lifetime.
- Or have two different inference contexts, one for use during trait solving and one during type-checking.
- This sounds sorta terrible, but I’ve been hoping that, more and more, we can move towards the trait solver being its own “virtual machine”. Maybe it can take ownership of inference contexts and memory use more completely, and the type checker can be its client?
Having written these out, I sort of suspect that this is the kind of discussion that we won’t solve in an internals thread, and that we need instead to “pursue the code” and see where it takes us. But I’m going to post anyway, just to get the creative juices flowing. =) Maybe we’ll hit on some idea that proves useful later.