OK, I’m finally getting to this write-up.
Regular meetings:
We decided to meet on Mondays at 19:00 UTC on a regular basis to check in. If you would like to be added to the calendar invite, let me know somehow.
How to subdivide the work:
The highlight was that we broke the work down into a few parts (I’m going to mildly extend this list beyond what was discussed in the meeting, actually):
- Move existing trait selection into a query
- Specify and start implementing lowering rules
- Define connection between the Chalk SLG solver and rustc
Move existing trait selection into a query:
Right now, the trait solver kind of executes “inline” with other queries:
fn typeck_query() {
    let inference_context = InferenceContext::new();
    do_trait_solving(inference_context, some_goal);
}
The idea is to use the canonicalization primitives introduced in PR 48411 to break this out into a rustc query. The new setup might look something like this (some docs are in this rustc-guide PR but more are needed):
fn typeck_query() {
    let inference_context = InferenceContext::new();
    let (canonical_goal, values) = canonicalize(some_goal);
    let canonical_result = trait_solving_query(canonical_goal);
    apply_query_result(canonical_result, values);
}
fn trait_solving_query(canonical_goal) -> CanonicalResult {
    let inference_context = InferenceContext::new();
    let result = do_trait_solving(inference_context, canonical_goal);
    canonicalize(result)
}
This new setup enables several things:
- the memoized rustc queries integrate with incremental and provide caching
- (the SLG solver also caches internally though)
- canonicalization lets us more readily cache across functions and more broadly
- the queries provide an abstraction barrier behind which we can tinker with the trait implementation as much as we like
- my eventual goal: move as much out into the chalk crate as we can, behind a strong VM-like abstraction barrier
First steps:
- Land #48411
- Converting evaluate_predicate into a query is a great first step (#48536)
Ownership and goals:
@nikomatsakis will try to create more issues in this area before next meeting.
Specify and start implementing lowering rules
A big part of chalk has been defining the rules for impls and other items in a more abstract fashion. Right now that code is kind of messy and lives in Chalk’s lower module.
My rustc-guide PR includes a preliminary write-up of these rules as well.
One question that was raised here: can we abstract this lowering code out so that it can be shared between rustc and chalk? (The current Chalk code is sort of messy anyway.) Overall, this is a bit unclear, since it depends heavily on the specific data structures used in the compiler and in chalk, but maybe it can be done.
Some possible work to do:
- Refactoring the structure of ty::Predicate (#48539)
- Other refactorings? We need to think a bit about what data structures are needed in rustc, that should enable us to break out specific work items
- Experiment with extracting Chalk’s lowering rules into a separate crate that is generic over its inputs
- or reimplementing from scratch!
Define connection between the Chalk SLG solver and rustc
Chalk’s SLG solver is now extracted into its own crate, chalk-slg. This crate is defined with respect to a generic interface specified by traits. This generic interface needs more comments (story of my life). The hope is that we can share this solver code between chalk and rustc, just with two different instantiations of that trait.
There is some work to be done in making this happen:
- We need a “rental-like” version of InferCtxt (maybe the rental crate itself can be used, though from a quick inspection that was not obvious). I will try to write up an issue about this shortly; it might be a good starting issue.
- The problem is that the inference context today is always tied to a closure, but we will want to be able to create an inference context, do work in it, return, and then come back and do more work. This is because the SLG solver operates in a kind of “async” style.
- Comment the interface more thoroughly, perhaps simplify it (e.g., there are empty traits that could in some cases be dropped).
- We can then start defining the various types and operations needed by the interface and see how it maps.
How to manage the code:
As usual, we want to do our development “on master,” co-existing with the existing trait system. To that end, we will probably want to introduce a flag (-Zchalk) that enables the new trait system implementation. The expectation is that the new system will initially have worse error reporting and so forth.
We may want to create a branch (say, on my fork) so that we can land PRs without waiting for bors. When bootstrapping NLL, I found this helpful for enabling more people to collaborate on a branch. The idea would then be that we move things back from the fork to master on a regular basis.
How to manage assignments:
We’re still in this annoying bootstrapping phase. The single biggest task – somewhat gated on me, though hopefully we can change that over time – is to start creating subtasks. I will try to devote energy to that today, tomorrow, and over the weekend, so that on Monday we can really have a list of specific tasks to drill into and assign.