WG-Traits Minutes and General Discussion

This is a running thread devoted to the goal of spiking a chalk-style trait solver in rustc. As not everyone is able to attend the meetings, I intend to post updates in here for asynchronous consumption and further discussion. There’s also a Dropbox Paper document.

We had our first meeting yesterday. I will post the notes shortly.

What exactly do you mean by “spiking”? From the subject line I thought you meant the chalk project was going to be abandoned, because journalists say a story has been “spiked” when the editor decides not to publish it.

I've usually heard the term spike used in software engineering to mean that you are pushing to get some particular feature implemented, and then coming back and filling in all the stuff around it. Think "Minimum Viable Product" or something.

The metaphor is that we are taking a spike down through the source, doing just what is needed. I don't know, it's a weird metaphor now that I think about it. =)

To me a “spike” is a work item in the scrum branch of agile methodology which cannot and does not get a story point estimate, typically an open-ended research task whose output is a non-spike work item along with enough information to give it an estimate.

We programmers could do better with our jargon.

OK, I’m finally getting to this write-up.

Regular meetings:

We decided to meet on Mondays at 19:00 UTC on a regular basis to check in. If you would like to be added to the calendar invite, let me know somehow.

How to subdivide the work:

The highlight was that we broke the work down into a few parts (I’m going to mildly extend this list beyond what was discussed in the meeting, actually):

  • Move existing trait selection into a query
  • Specify and start implementing lowering rules
  • Define connection between the Chalk SLG solver and rustc

Move existing trait selection into a query:

Right now, the trait solver kind of executes “inline” with other queries:

fn typeck_query() {
    // Trait solving happens "inline", inside the type-checking query itself.
    let inference_context = InferenceContext::new();
    do_trait_solving(inference_context, some_goal);
}

The idea is to use the canonicalization primitives introduced in PR 48411 to break this out into a rustc query. The new setup might look something like this (some docs are in this rustc-guide PR but more are needed):

fn typeck_query() {
    let inference_context = InferenceContext::new();
    let (canonical_goal, values) = canonicalize(some_goal);
    let canonical_result = trait_solving_query(canonical_goal);
    apply_query_result(canonical_result, values);
}

fn trait_solving_query(canonical_goal) -> CanonicalResult {
    // The query gets a fresh inference context of its own; it sees only
    // the canonicalized goal, not the caller's inference state.
    let inference_context = InferenceContext::new();
    let result = do_trait_solving(inference_context, canonical_goal);
    canonicalize(result)
}
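To make the canonicalization idea concrete, here is a self-contained toy sketch (the `Ty` enum and `canonicalize` function are invented for illustration and are not the rustc API): inference variables get renumbered in order of first appearance, so that structurally identical goals from different functions map to the same cache key.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    // An inference variable like `?7`, local to one inference context.
    Infer(u32),
    // A named type constructor applied to arguments, e.g. `Vec<?7>`.
    Apply(String, Vec<Ty>),
}

// Renumber inference variables in first-appearance order. The `map`
// records the original-variable -> canonical-variable substitution,
// which the caller needs later to apply the query result.
fn canonicalize(ty: &Ty, map: &mut HashMap<u32, u32>) -> Ty {
    match ty {
        Ty::Infer(v) => {
            let next = map.len() as u32;
            Ty::Infer(*map.entry(*v).or_insert(next))
        }
        Ty::Apply(name, args) => Ty::Apply(
            name.clone(),
            args.iter().map(|a| canonicalize(a, map)).collect(),
        ),
    }
}

fn main() {
    // `Vec<?7>: Clone` and `Vec<?3>: Clone` canonicalize identically,
    // so one cached query answer serves both callers.
    let g1 = Ty::Apply("Vec".into(), vec![Ty::Infer(7)]);
    let g2 = Ty::Apply("Vec".into(), vec![Ty::Infer(3)]);
    let c1 = canonicalize(&g1, &mut HashMap::new());
    let c2 = canonicalize(&g2, &mut HashMap::new());
    assert_eq!(c1, c2);
    println!("canonical form: {:?}", c1);
}
```

The point of returning the map is exactly the `values` in the pseudocode above: the query answer comes back in terms of canonical variables, and the caller substitutes its own inference variables back in.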

This new setup enables several things:

  • the memoized rustc queries integrate with incremental and provide caching
    • (the SLG solver also caches internally though)
  • canonicalization lets us more readily cache across functions and more broadly
  • the queries provide an abstraction barrier behind which we can tinker with the trait implementation as much as we like
    • my eventual goal: move as much out into the chalk crate as we can, behind a strong VM-like abstraction barrier

First steps:

  • Land #48411
  • Converting evaluate_predicate into a query is a great first step (#48536)

Ownership and goals:

@nikomatsakis will try to create more issues in this area before next meeting.

Specify and start implementing lowering rules

A big part of chalk has been defining the rules for impls and things in a more abstract fashion. Right now that code is kind of messy and lives in Chalk’s lower module. My rustc-guide PR includes a preliminary write-up of these rules as well.

One question that was raised here: can we abstract the code for this lowering out, so that it can be shared between rustc and chalk? Currently the Chalk code is sort of messy anyway. Overall, this is a bit unclear, as it has a lot to do with the specifics of the various data structures used in the compiler/chalk, but maybe it can be done.

Some possible work to do:

  • Refactoring the structure of ty::Predicate (#48539)
  • Other refactorings? We need to think a bit about what data structures are needed in rustc, that should enable us to break out specific work items
  • Experiment with extracting Chalk’s lowering rules into a separate crate that is generic over its inputs
    • or reimplementing from scratch!

Define connection between the Chalk SLG solver and rustc

Chalk’s SLG solver is now extracted into its own crate, chalk-slg. This crate is defined with respect to a generic interface defined by traits. This generic interface needs more comments (story of my life). The hope is that we can share this solver code between chalk and rustc, just with two different instantiations of that trait.
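The shape of that sharing can be sketched roughly like this (all names here are invented for illustration; the real chalk-slg interface is larger and different): the solver logic is written once against a trait, and chalk and rustc each supply their own instantiation of the associated types and operations.

```rust
// A stand-in for the solver's generic interface. The real trait bundles
// many associated types (goals, substitutions, inference tables, ...).
trait Context {
    type Goal;
    // Attempt to solve one goal; `true` means success.
    fn solve(&mut self, goal: &Self::Goal) -> bool;
}

// Solver logic written once, generically: works for ANY instantiation.
fn solve_all<C: Context>(cx: &mut C, goals: &[C::Goal]) -> usize {
    goals.iter().filter(|g| cx.solve(g)).count()
}

// Toy instantiation 1: a chalk-test-harness-like context where goals
// are just strings.
struct ChalkLike;
impl Context for ChalkLike {
    type Goal = String;
    fn solve(&mut self, goal: &String) -> bool {
        goal.contains("Implemented")
    }
}

// Toy instantiation 2: a rustc-like context where goals are interned
// as indices.
struct RustcLike;
impl Context for RustcLike {
    type Goal = u32;
    fn solve(&mut self, goal: &u32) -> bool {
        *goal % 2 == 0
    }
}

fn main() {
    let chalk_ok = solve_all(&mut ChalkLike, &["Implemented(T: Foo)".to_string(), "Oops".to_string()]);
    let rustc_ok = solve_all(&mut RustcLike, &[2, 3, 4]);
    println!("chalk-like solved {}, rustc-like solved {}", chalk_ok, rustc_ok);
}
```

The design payoff is that the solver core never mentions rustc types directly, so it can live outside the compiler and be tested against the lightweight chalk instantiation.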

There is some work to be done in making this happen:

  • We need a “rental-like” version of InferCtxt (maybe rental itself can be used, though from quick inspection that was not obvious). I will try to write up an issue about this shortly; it might be a good starting issue.
    • The problem is that the inference context today is always tied to a closure, but we will want to be able to create an inference context, do work in it, and then return – then come back and do more work. This is because the SLG solver operates in kind of an “async” style.
  • Comment the interface more thoroughly, perhaps simplify it (e.g., there are empty traits that could in some cases be dropped).
  • We can then start defining the various types and operations needed by the interface and see how it maps.
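To illustrate the InferCtxt item in the list above: here is a toy sketch (these types are invented stand-ins, not rustc's real `InferCtxt` API) contrasting the closure-scoped style we have today with the owned-handle style an “async”-style solver needs.

```rust
// A stand-in for an inference context that allocates inference variables.
struct InferCtxt {
    num_vars: u32,
}

impl InferCtxt {
    fn new() -> Self {
        InferCtxt { num_vars: 0 }
    }
    fn fresh_var(&mut self) -> u32 {
        let v = self.num_vars;
        self.num_vars += 1;
        v
    }
}

// Today's style: the context only exists inside the closure, so all
// inference state is gone once this call returns.
fn with_infer_ctxt<R>(f: impl FnOnce(&mut InferCtxt) -> R) -> R {
    let mut cx = InferCtxt::new();
    f(&mut cx)
}

// What the SLG solver wants: an owned handle, so it can do a bit of
// work, return to the caller, and later resume with state intact.
struct SolverStrand {
    cx: InferCtxt,
}

impl SolverStrand {
    fn new() -> Self {
        SolverStrand { cx: InferCtxt::new() }
    }
    // Each call does an increment of work; the context persists between calls.
    fn step(&mut self) -> u32 {
        self.cx.fresh_var()
    }
}

fn main() {
    let one_shot = with_infer_ctxt(|cx| cx.fresh_var());
    println!("closure-scoped result: {}", one_shot);

    let mut strand = SolverStrand::new();
    strand.step();
    // Unlike the closure version, state survives between calls.
    println!("resumed step result: {}", strand.step());
}
```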

How to manage the code:

As usual, we want to do our development “on master”, kind of co-existing with the existing trait system. To that end, we will probably want to introduce a flag (-Zchalk) that enables the new trait system implementation. The expectation is that the new system will have worse error-reporting and so forth initially.

We may want to create a branch (say, on my fork) so that we can land PRs without waiting for bors. When bootstrapping NLL, I found that was helpful for enabling more people to collaborate on a branch. The idea would then be that we move things back from the fork to master on a regular basis.

How to manage assignments:

We’re still in this annoying bootstrapping phase. The single biggest task – currently somewhat gated on me, though hopefully we can change that over time – is to start creating subtasks. I will try to devote energy to that today/tomorrow/over-the-weekend, so that on Monday we can really have a list of specific tasks to drill into and assign.

Thank you @nikomatsakis for investing so much in people!

So @scalexm and I had a short meeting yesterday to try to plan out next steps in more detail. This post summarizes our thoughts – after writing it, I plan to go and open corresponding issues in the Rust tracker where appropriate.

(If you are interested, the meeting notes can be found here, but they aren’t really meant to be decipherable to anyone who wasn’t at the meeting.)

Ultimately, we came up with the following plan:

Minimal PR for lowering rules

First, @scalexm is going to take a shot at creating a “minimal PR” that creates “lowering rules” in rustc, hopefully this weekend. The goal is to create “just enough” framework that we can make up an issue and others can start adding to it, and thus getting their hands dirty with the newer system.

The target of this PR is to have a system for lowering Rust items into program clauses. We are imagining a rustc query like program_clauses(DefId), where the DefId identifies a trait, associated type declaration, etc; it would return a set of clauses roughly as described in my rustc-guide PR. This would then be coupled with a unit-test framework using internal #[rustc] attributes; these attributes would run the lowering and emit the results as errors. This is a common trick in rustc that enables you to write compile-fail/ui unit tests like this:

#[rustc_program_clause]
trait Foo { }
//~^ ERROR forall<Self> { Implemented(Self: Foo) :- FromEnv(Self: Foo) }

fn main() { }

(Note: as part of this PR, we would wind up duplicating – private to the librustc_traits crate, I believe – some portion of existing types, notably the types that represent goals and so forth. This is a bit of a shift from my previous plans: where I had expected to re-use the existing ty::Predicate type, we are now going to create our own that matches the Chalk types more closely. This means that the grungy refactor in #48539 is just not needed (at least not yet). This seems fine for our purposes of spiking to a working system – of course eventually we’ll have to reconcile the two types.)
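To give a feel for what such a lowering produces, here is a toy model (the `Clause` struct and `lower_trait` function are invented for illustration; the real representation lives in librustc_traits and chalk): lowering a trait declaration yields the “implemented-from-env” style clause shown in the test above, `forall<Self> { Implemented(Self: Foo) :- FromEnv(Self: Foo) }`.

```rust
// A clause of the form: forall<binders> { consequence :- conditions }
#[derive(Debug, PartialEq)]
struct Clause {
    binders: Vec<String>,
    consequence: String,
    conditions: Vec<String>,
}

// Lower a trait declaration (name plus extra generic parameters) into
// its implemented-from-env clause.
fn lower_trait(name: &str, params: &[&str]) -> Clause {
    let mut binders = vec!["Self".to_string()];
    binders.extend(params.iter().map(|p| p.to_string()));
    let self_ref = if params.is_empty() {
        format!("Self: {}", name)
    } else {
        format!("Self: {}<{}>", name, params.join(", "))
    };
    Clause {
        binders,
        consequence: format!("Implemented({})", self_ref),
        conditions: vec![format!("FromEnv({})", self_ref)],
    }
}

fn main() {
    let clause = lower_trait("Foo", &[]);
    // Prints the same shape the unit-test framework would emit as an "error".
    println!(
        "forall<{}> {{ {} :- {} }}",
        clause.binders.join(", "),
        clause.consequence,
        clause.conditions.join(", ")
    );
}
```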

Other work items

In addition to that minimal PR, there are a number of other work items, some of which I think are ripe for mentoring. Some of them have issues on the rustc repo already and some don’t. I’ll try to open issues today for things that don’t and post back here when done.

Update: I opened up Issue #48895, which is a good starting point for anyone looking to get involved. The goal is to create a layer of abstraction between the type checker and the trait solving, so that we can later switch the trait solver that is being used at will.

We had another meeting. You can find the notes in the DropBox paper document. That includes a summary of what happened and some notes on goals for this coming week. I’m hoping to get more issues opened up – both on rustc and in the chalk repo.

I have a few questions I’d like feedback on:

  • What topics would be good to cover in the rustc-guide?
    • If you’ve not seen what I wrote already, please do check it out.
  • Is there anything I could be doing that would help you personally get involved (that I’m not already doing)? It seems to me that we’re gated to some extent on actionable issues, and we’re working on that, but I’m wondering what else I might be overlooking.

I’ve added some links to chalk and ena issues btw into the main tracking issue for the traits working group.

Reminder: meeting today at 18:00 UTC (https://appear.in/rustaceans)! Please update the paper document as appropriate.

cc @giannicic, @scalexm, @aravindpg, @ishitatsuyuki, @PramodBisht, and anyone else who may have updates. =)

I sent a message over gitter, but: meeting today, if you can make it!

I sort of failed to produce an update after the last meeting, sorry about that. Will try to do better this time. =)

Has anybody looked at what Haskell’s GHC compiler does?

If Haskell’s typeclasses + type families are as expressive as Rust’s traits and GHC works properly, then we could just copy whatever GHC does instead of reinventing it.

Update from 2018.04.02 meeting

Updates. There are a number of exciting updates from last week in the dropbox paper. If you’re enjoying following along on everyone’s progress, definitely check it out!

Goals for this week. This week, the focus will be on pushing towards connecting rustc and the SLG solver. Here are the highlights of work to be done:

  • We should finish off the chalk lowering rules:
    • Rule WellFormed-TraitRef
    • Rule ProjectionEq-Normalize
    • Rule ProjectionEq-Skolemize
  • Urgent: We need a query to figure out the program clauses that can solve a given domain goal (#49600). I’ve started this and can keep hacking on it, but if someone is keen and has some spare time over this week, I’d love to collaborate, as I’m pretty hard up for time right now.
  • The GAT rules that @scalexm was hacking on need to be written out in rustc-guide and implemented in rustc.

Things that need documenting. The traits-guide coverage of higher-ranked bounds and universes is not really done. There is some coverage under region solving, but it needs to be moved into the trait section. I’d be happy to work with someone who wants to learn that stuff and hence keep extending the trait docs.

And a poll! There is a question I forgot to raise! We sometimes use the term “skolemized” in Chalk and rustc to refer to “free, universally quantified names”. So for example, if we are solving a higher-ranked trait bound like for<'a> { T: Foo<'a> }, we would substitute a “skolemized region” in for 'a, yielding T: Foo<'!0>. These !-types and regions (not to be confused with the never type…) are kind of a representative that stands in for “some unknown lifetime”; it works exactly like talking about T in the body of a generic function fn foo<T>() { ... }.
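That substitution step can be sketched as follows (a toy model with invented types, not the compiler's representation): when we strip the `for<'a>` binder, each bound region is replaced by a fresh `'!n` placeholder that must then be treated opaquely.

```rust
#[derive(Clone, Debug, PartialEq)]
enum Region {
    // A region bound by an enclosing `for<..>` binder, by index.
    Bound(u32),
    // A "skolemized"/placeholder region, written `'!n`.
    Placeholder(u32),
}

#[derive(Clone, Debug, PartialEq)]
struct Goal {
    trait_name: String,
    region: Region,
}

// Instantiate the binder: every bound region becomes a fresh placeholder.
fn instantiate_with_placeholders(goal: &Goal, next_placeholder: &mut u32) -> Goal {
    let region = match &goal.region {
        Region::Bound(_) => {
            let p = Region::Placeholder(*next_placeholder);
            *next_placeholder += 1;
            p
        }
        r => r.clone(),
    };
    Goal { trait_name: goal.trait_name.clone(), region }
}

fn main() {
    // for<'a> { T: Foo<'a> }  ==>  T: Foo<'!0>
    let hrtb = Goal { trait_name: "Foo".into(), region: Region::Bound(0) };
    let mut counter = 0;
    let instantiated = instantiate_with_placeholders(&hrtb, &mut counter);
    println!("instantiated: {:?}", instantiated);
}
```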

Anyway, the name skolemized is not good for two reasons:

  • It is confusing to newcomers and very obscure.
  • We’re probably using it wrong anyway. From what I can tell, skolemization is generally defined as removing existential quantifiers and replacing them with universal ones, but here we are…not doing that.

So, what I am leading up to here is: what name should we adopt instead? I have been torn between “forall type/region” and “universal type/region”. I want something that is at least somewhat intuitive. I think another term sometimes used is eigenvariable, but that does not seem more obvious. I’m leaning towards ‘forall variable’, on the basis that “universally quantified” is more jargon-y than “forall”, but…

forall type/region gets my vote. It most closely ties it to the actual syntax it represents in Rust. eigenvariable has a weird conflict w/ eigenclass in Ruby for me.

Matsakified :slight_smile:

In seriousness, I think I would prefer something that lends itself nicely to being abbreviated. Also, the name should make it clear that the region is not universally quantified, which is perhaps a bit counterintuitive from a name like "forall variable"... maybe something like "dequalified variable" or "de-foralled"?

Would “wildcard type” or “wildcard region” work? It ties into (I think) a familiar term for most programmers. Perhaps it doesn’t carry enough of the underlying meaning, though.

IIRC, it is an abstraction of a specific item, so perhaps just prefixing the item noun (or other term) with the qualifier “abstract” would suffice.

I feel like that is too abstract :stuck_out_tongue: