Spiking chalk


This is a running thread devoted to the goal of spiking a chalk-style trait solver in rustc. As not everyone is able to attend the meetings, I intend to post updates in here for asynchronous consumption and further discussion. There’s also a Dropbox Paper document.

We had our first meeting yesterday. I will post the notes shortly.


What exactly do you mean by “spiking”? From the subject line I thought you meant the chalk project was going to be abandoned, because journalists say a story has been “spiked” when the editor decides not to publish it.


I’ve usually heard the term “spike” used in software engineering to mean that you are pushing to get some particular feature implemented, and then coming back and filling in all the stuff around it. Think “Minimum Viable Product” or something.

The metaphor is that we are taking a spike down through the source, doing just what is needed. I don’t know, it’s a weird metaphor now that I think about it. =)


To me a “spike” is a work item in the scrum branch of agile methodology which cannot and does not get a story point estimate, typically an open-ended research task whose output is a non-spike work item along with enough information to give it an estimate.

We programmers could do better with our jargon.


OK, I’m finally getting to this write-up.

Regular meetings:

We decided to meet on Mondays at 19:00 UTC on a regular basis to check in. If you would like to be added to the calendar invite, let me know somehow.

How to subdivide the work:

The highlight was that we broke the work down into a few parts (I’m going to mildly extend this list beyond what was discussed in the meeting, actually):

  • Move existing trait selection into a query
  • Specify and start implementing lowering rules
  • Define connection between the Chalk SLG solver and rustc

Move existing trait selection into a query:

Right now, the trait solver kind of executes “inline” with other queries:

fn typeck_query() {
    let inference_context = InferenceContext::new();
    do_trait_solving(inference_context, some_goal);
}

The idea is to use the canonicalization primitives introduced in PR 48411 to break this out into a rustc query. The new setup might look something like this (some docs are in this rustc-guide PR but more are needed):

fn typeck_query() {
    let inference_context = InferenceContext::new();
    let (canonical_goal, values) = canonicalize(some_goal);
    let canonical_result = trait_solving_query(canonical_goal);
    apply_query_result(canonical_result, values);
}

fn trait_solving_query(canonical_goal) -> CanonicalResult {
    let inference_context = InferenceContext::new();
    do_trait_solving(inference_context, canonical_goal)
}

This new setup enables several things:

  • the memoized rustc queries integrate with incremental and provide caching
    • (the SLG solver also caches internally though)
  • canonicalization lets us more readily cache across functions and more broadly
  • the queries provide an abstraction barrier behind which we can tinker with the trait implementation as much as we like
    • my eventual goal: move as much out into the chalk crate as we can, behind a strong VM-like abstraction barrier
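To make the canonicalization point concrete, here is a toy sketch (the types and the `canonicalize` function are invented for illustration; rustc’s real implementation is in PR 48411 and looks nothing like this). The idea is that inference variables get replaced by numbered placeholders, so equivalent goals arising in different functions map to the same cache key:

```rust
// Toy illustration of canonicalization: inference variables are replaced
// by numbered canonical placeholders, so `Vec<?3>` in one function and
// `Vec<?7>` in another produce the same canonical goal.
#[derive(PartialEq, Eq, Hash, Debug, Clone)]
enum Ty {
    Infer(u32),     // an inference variable like `?3`
    Canonical(u32), // a canonical placeholder like `^0`
    Vec(Box<Ty>),   // `Vec<T>`
    U32,            // `u32`
}

fn canonicalize(goal: &Ty, map: &mut Vec<u32>) -> Ty {
    match goal {
        Ty::Infer(v) => {
            // Reuse the placeholder if we've seen this variable before.
            let idx = map.iter().position(|&m| m == *v).unwrap_or_else(|| {
                map.push(*v);
                map.len() - 1
            });
            Ty::Canonical(idx as u32)
        }
        Ty::Canonical(i) => Ty::Canonical(*i),
        Ty::Vec(inner) => Ty::Vec(Box::new(canonicalize(inner, map))),
        Ty::U32 => Ty::U32,
    }
}

fn main() {
    // `Vec<?3>` in one function and `Vec<?7>` in another...
    let g1 = canonicalize(&Ty::Vec(Box::new(Ty::Infer(3))), &mut Vec::new());
    let g2 = canonicalize(&Ty::Vec(Box::new(Ty::Infer(7))), &mut Vec::new());
    // ...canonicalize to the same key, so one cached result serves both.
    assert_eq!(g1, g2);
    println!("{:?}", g1); // Vec(Canonical(0))
}
```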

First steps:

  • Land #48411
  • Converting evaluate_predicate into a query is a great first step (#48536)

Ownership and goals:

@nikomatsakis will try to create more issues in this area before next meeting.

Specify and start implementing lowering rules:

A big part of chalk has been defining the rules for impls and things in a more abstract fashion. Right now that code is kind of messy and lives in Chalk’s lower module. My rustc-guide PR includes a preliminary write-up of these rules as well.
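To give a flavor of what these rules do, here is a toy sketch (the `Impl` type and `lower_impl` function are invented for illustration; the real rules in Chalk’s lower module are far richer). An impl lowers to a clause whose head is `Implemented(...)` and whose body comes from the where-clauses:

```rust
// Toy sketch of "lowering": a Rust impl becomes a logic clause.
struct Impl {
    trait_name: &'static str,
    self_ty: &'static str,
    where_clauses: Vec<&'static str>,
}

fn lower_impl(i: &Impl) -> String {
    let head = format!("Implemented({}: {})", i.self_ty, i.trait_name);
    if i.where_clauses.is_empty() {
        head
    } else {
        format!("{} :- {}", head, i.where_clauses.join(", "))
    }
}

fn main() {
    // `impl<T: Clone> Clone for Vec<T>` lowers to roughly:
    // forall<T> { Implemented(Vec<T>: Clone) :- Implemented(T: Clone) }
    let clone_impl = Impl {
        trait_name: "Clone",
        self_ty: "Vec<T>",
        where_clauses: vec!["Implemented(T: Clone)"],
    };
    println!("{}", lower_impl(&clone_impl));
    // → Implemented(Vec<T>: Clone) :- Implemented(T: Clone)
}
```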

One question that was raised here: can we abstract the code for this lowering out, so that it can be shared between rustc and chalk? Currently the Chalk code is sort of messy anyway. Overall, this is a bit unclear, as it has a lot to do with the specifics of the various data structures used in the compiler/chalk, but maybe it can be done.

Some possible work to do:

  • Refactoring the structure of ty::Predicate (#48539)
  • Other refactorings? We need to think a bit about what data structures are needed in rustc; that should enable us to break out specific work items
  • Experiment with extracting Chalk’s lowering rules into a separate crate that is generic over its inputs
    • or reimplementing from scratch!

Define connection between the Chalk SLG solver and rustc:

Chalk’s SLG solver has now been extracted into its own crate, chalk-slg. This crate is defined with respect to a generic interface specified by traits. This generic interface needs more comments (story of my life). The hope is that we can share this solver code between chalk and rustc, just with two different instantiations of those traits.
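As a rough sketch of what “a generic interface specified by traits” means here (all names invented; see the chalk-slg crate for the real interface), the solver logic is written against an abstract context trait, and chalk and rustc would each supply their own implementation:

```rust
// Hypothetical context trait the solver is generic over (not the real
// chalk-slg interface; names invented for illustration).
trait Context {
    type Goal;
    type Solution;
    fn solve(&mut self, goal: Self::Goal) -> Option<Self::Solution>;
}

// One hypothetical instantiation (rustc would provide another, backed
// by its own types and inference machinery).
struct ChalkContext;

impl Context for ChalkContext {
    type Goal = String;
    type Solution = String;
    fn solve(&mut self, goal: String) -> Option<String> {
        Some(format!("solved: {}", goal))
    }
}

// The solver logic itself is generic over the context, so the same
// code can be shared between the two consumers.
fn run_solver<C: Context>(cx: &mut C, goal: C::Goal) -> Option<C::Solution> {
    cx.solve(goal)
}

fn main() {
    let mut cx = ChalkContext;
    let result = run_solver(&mut cx, "Implemented(u32: Foo)".to_string());
    println!("{:?}", result); // Some("solved: Implemented(u32: Foo)")
}
```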

There is some work to be done in making this happen:

  • We need a “rental-like” version of InferCtxt (maybe rental itself can be used, though from a quick inspection that was not obvious). I will try to write up an issue about this shortly; it might be a good starting issue.
    • The problem is that the inference context today is always tied to a closure, but we will want to be able to create an inference context, do work in it, and then return – then come back and do more work. This is because the SLG solver works in kind of an “async” style.
  • Comment the interface more thoroughly, perhaps simplify it (e.g., there are empty traits that could in some cases be dropped).
  • We can then start defining the various types and operations needed by the interface and see how it maps.
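To illustrate the “async”-style interaction that motivates the rental-like InferCtxt (all names here are invented; this is not the real rustc or chalk API): the inference state must outlive any single closure, so the caller can pull answers one at a time, go do other work, and come back for more.

```rust
// Stand-in for inference tables, snapshots, etc.
#[allow(dead_code)]
struct InferCtxt {
    vars: Vec<String>,
}

struct AnswerStream {
    #[allow(dead_code)]
    infcx: InferCtxt,     // owned by the stream, not tied to a closure
    pending: Vec<String>, // answers not yet delivered to the caller
}

impl AnswerStream {
    fn new(pending: Vec<String>) -> Self {
        AnswerStream {
            infcx: InferCtxt { vars: Vec::new() },
            pending,
        }
    }

    // Each call resumes where the last one left off; the inference
    // state persists in between calls.
    fn next_answer(&mut self) -> Option<String> {
        self.pending.pop()
    }
}

fn main() {
    let mut stream =
        AnswerStream::new(vec!["B: Trait".to_string(), "A: Trait".to_string()]);
    assert_eq!(stream.next_answer(), Some("A: Trait".to_string()));
    // ... the caller can go off and do other work here ...
    assert_eq!(stream.next_answer(), Some("B: Trait".to_string()));
    assert_eq!(stream.next_answer(), None);
}
```

Today’s closure-based `enter` pattern makes this impossible, because the context cannot escape the closure; that is exactly what the rental-like version would fix.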

How to manage the code:

As usual, we want to do our development “on master”, kind of co-existing with the existing trait system. To that end, we will probably want to introduce a flag (-Zchalk) that enables the new trait system implementation. The expectation is that the new system will have worse error-reporting and so forth initially.

We may want to create a branch (say, on my fork) so that we can land PRs without waiting for bors. When bootstrapping NLL, I found that was helpful for enabling more people to collaborate on a branch. The idea would then be that we move things back from the fork to master on a regular basis.

How to manage assignments:

We’re still in this annoying bootstrapping phase. The single biggest task – somewhat gated on me? but hopefully we can change that over time – is to start creating subtasks. I will try to devote energy to that today/tomorrow/over-the-weekend, so that on Monday we can really have a list of specific tasks to drill into and assign.


Thank you @nikomatsakis for investing so much in people!


So @scalexm and I had a short meeting yesterday to try to plan out next steps in more detail. This post summarizes our thoughts – after writing it, I plan to go and open corresponding issues in the Rust tracker where appropriate.

(If you are interested, the meeting notes can be found here, but they aren’t really meant to be decipherable to anyone who wasn’t at the meeting.)

Ultimately, we came up with the following plan:

Minimal PR for lowering rules

First, @scalexm is going to take a shot at creating a “minimal PR” that introduces “lowering rules” in rustc, hopefully this weekend. The goal is to create “just enough” framework that we can open an issue and others can start adding to it, thus getting their hands dirty with the newer system.

The target of this PR is to have a system for lowering Rust items (traits, impls, and so on) into logic clauses. We are imagining a rustc query like program_clauses(DefId), where the DefId identifies a trait, associated type declaration, etc. – that would return a set of clauses roughly as described in my rustc-guide PR. This would then be coupled with a unit test framework using #[rustc] attributes; these attributes would run the lowering and emit the results as errors. This is a common trick in rustc that enables you to write compile-fail/ui unit tests like this:

trait Foo { }
//~^ ERROR forall<Self> { Implemented(Self: Foo) :- FromEnv(Self: Foo) }

fn main() { }

(Note: as part of this PR, we would wind up duplicating – private to the librustc_traits crate, I believe – some portion of the existing types, notably the types that represent goals and so forth. This is a bit of a shift from my previous plans: where I had expected to re-use the existing ty::Predicate type, we are now going to create our own types that match the Chalk types more closely. This means that the grungy refactor in #48539 is just not needed (at least not yet). This seems fine for our purposes of spiking to a working system – of course, eventually we’ll have to reconcile the two types.)
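For illustration, here is a hypothetical, heavily simplified version of what such crate-private goal/clause types might look like (the names and shapes are invented; they are not actual rustc or Chalk code):

```rust
// Hypothetical Chalk-style goal and clause types, heavily simplified.
#[derive(Debug)]
enum Goal {
    Implemented(String), // e.g. Implemented(Self: Foo)
    FromEnv(String),     // e.g. FromEnv(Self: Foo)
}

#[derive(Debug)]
struct Clause {
    consequence: Goal,     // the head, e.g. Implemented(Self: Foo)
    conditions: Vec<Goal>, // the body, e.g. [FromEnv(Self: Foo)]
}

fn main() {
    // Lowering `trait Foo { }` would yield roughly:
    // forall<Self> { Implemented(Self: Foo) :- FromEnv(Self: Foo) }
    let clause = Clause {
        consequence: Goal::Implemented("Self: Foo".to_string()),
        conditions: vec![Goal::FromEnv("Self: Foo".to_string())],
    };
    println!("{:?}", clause);
}
```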

Other work items

In addition to that minimal PR, there are a number of other work items, some of which I think are ripe for mentoring. Some of them have issues on the rustc repo already and some don’t. I’ll try to open issues today for things that don’t and post back here when done.

Short-lived inference contexts

Update: I opened up Issue #48895, which is a good starting point for anyone looking to get involved. The goal is to create a layer of abstraction between the type checker and the trait solving, so that we can later switch the trait solver that is being used at will.

Announcing: Traits Working Group

We had another meeting. You can find the notes in the Dropbox Paper document. That includes a summary of what happened and some notes on goals for this coming week. I’m hoping to get more issues opened up – both on rustc and in the chalk repo.

I have a few questions I’d like feedback on:

  • What topics would be good to cover in the rustc-guide?
    • If you’ve not seen what I wrote already, please do check it out.
  • Is there anything I could be doing that would help you personally get involved (that I’m not already doing)? It seems to me that we’re gated to some extent on actionable issues, and we’re working on that, but I’m wondering what else I might be overlooking.


By the way, I’ve added some links to chalk and ena issues into the main tracking issue for the traits working group.