Announcing: Traits Working Group

Are you interested in improving the trait system?

I’m trying to start up a new working group dedicated to “all things trait”. To start, the focus is going to be on the trait system implementation, but I expect our work to eventually feed back into the design of language features, by providing a better platform for experimentation as well as clearer ideas of the limits of what is possible.

Expectations

The main thing I am looking for is the ability to commit some time on a regular basis. You don’t have to be an expert on logic programming, though obviously familiarity with Rust and/or the trait system helps. We can mentor you through the rest.

Goals

  • Faster performance and caching
    • Eliminating or mitigating known exponential worst cases (a sketch of the general shape appears after this list)
      • though maybe @ishitatsuyuki has this under control for the current trait system implementation
  • Fewer bugs
  • No cycle errors
    • Lazy normalization
  • Better and custom error messages
    • libraries like Diesel ought to be able to give domain-specific errors
  • Strong foundation for future changes
    • specialization, Generic Associated Types (GATs), abstract type, to name a few
    • coherence improvements
      • e.g., some form of negative reasoning
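
To make the exponential worst case concrete, here is a hedged sketch (the trait name `Foo` and the tuple-doubling shape are illustrative, not an actual rustc worst case): without caching, a naive solver’s proof tree for the nested tuple below doubles at every level of nesting.

```rust
// Each `(T, U)` obligation fans out into two sub-obligations; a solver
// that re-proves shared subgoals instead of caching their results does
// exponential work in the nesting depth.
trait Foo {}

impl Foo for u32 {}
impl<T: Foo, U: Foo> Foo for (T, U) {}

fn assert_foo<T: Foo>() {}

type N0 = u32;
type N1 = (N0, N0);
type N2 = (N1, N1);
type N3 = (N2, N2); // each level doubles the naive proof tree

fn main() {
    assert_foo::<N3>();
}
```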

The vision from 10 kilofeet

The high-level vision is to realize the “Chalk plan”. This means that we abstract how traits work by lowering them internally to a simpler logic, and then extend the compiler with a really efficient solver for that logic. This should allow us to do much more caching than before, while also letting us add new features with relative ease (because they can be lowered to the intermediate IR without extending the base system). At this point, Chalk has more or less proved that this approach can work, I think. Eventually, this should allow us to not only do more, but to do it more efficiently.

The vision from 1 kilofeet

OK, let’s drill in a bit more. These are the major pieces that I see right now. I expect the details to evolve over time.

Trait queries and canonicalization. One of the key ideas is to extract today’s trait operations into cacheable queries – roughly as described in this blog post on chalk, though the details of what I have in mind have evolved since then. A key aspect of this is canonicalization, which allows us to take a goal that must be proven and isolate it from the surrounding inference context C; we can then solve it in isolation (caching the result) and re-apply the results to the inference context C (also discussed in that same Chalk blog post). A basic version of this technique is implemented in my recent PR, which uses it to cache associated type normalizations as well as some aspects of the dropck (resulting in big speed gains for NLL).
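
As a rough illustration, here is a toy model of the canonicalization step (these are not rustc’s actual types, just a sketch of the idea): inference variables in a goal are replaced with numbered placeholders, so structurally identical goals from different inference contexts hit the same cache entry.

```rust
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Ty {
    Infer(u32),     // an inference variable from the surrounding context
    Canonical(u32), // a numbered placeholder in the canonical form
    Vec(Box<Ty>),
    U32,
}

/// Replace inference variables with numbered placeholders, recording a
/// mapping so a cached answer can be replayed into the original context.
fn canonicalize(ty: &Ty, map: &mut Vec<u32>) -> Ty {
    match ty {
        Ty::Infer(v) => {
            let idx = match map.iter().position(|&m| m == *v) {
                Some(i) => i,
                None => {
                    map.push(*v);
                    map.len() - 1
                }
            };
            Ty::Canonical(idx as u32)
        }
        Ty::Vec(inner) => Ty::Vec(Box::new(canonicalize(inner, map))),
        other => other.clone(),
    }
}

fn main() {
    // `Vec<?7>` and `Vec<?3>` canonicalize to the same goal, so one
    // cached result serves both inference contexts.
    let mut map = Vec::new();
    let goal = canonicalize(&Ty::Vec(Box::new(Ty::Infer(7))), &mut map);
    assert_eq!(goal, Ty::Vec(Box::new(Ty::Canonical(0))));
    println!("canonical goal: {:?}, mapping: {:?}", goal, map);
}
```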

Universe-based handling of higher-ranked bounds. The current system for higher-ranked bounds is restrictive and won’t scale to things like for<T: Debug> X: Trait<T>, which is needed by generic associated types. It also doesn’t allow us to normalize under binders. Moving to a universe-based system should allow us to readily address these shortcomings – @sgrif has laid a lot of the foundation in this PR.
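
For orientation, here is a minimal sketch of the kinds of bounds involved. The for<'a> form compiles today; the for<T: Debug> form in the comment is hypothetical syntax that the universe-based approach is meant to make room for.

```rust
// Today's higher-ranked bounds quantify only over *lifetimes*:
fn call_with<F>(f: F)
where
    F: for<'a> Fn(&'a str) -> &'a str, // "for all lifetimes 'a"
{
    let s = String::from("hello");
    println!("{}", f(&s));
}

// A hypothetical `for<T: Debug> X: Trait<T>` bound would quantify over
// *types* as well, which is what generic associated types need.
fn main() {
    call_with(|s| s);
}
```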

Fully region-erased type check. Right now, the type checker also does lexical region computation. Then the NLL type checker, if enabled, erases all of those regions and starts over. Once the universe-based handling is complete, we ought to be able to completely erase regions from the initial type check (including binder sites, I hope). This will simplify the initial type checker substantially, but also allow for better caching.

On-demand solver. The idea is to – eventually – move rustc over to use a Chalk-like solver, probably some variant of the “on-demand SLG solver”. This solver has a number of advantages. Among them, it naturally handles cycles without resorting to cycle errors, and it is “complete” (it always finds an answer, even when internal queries do not have a single “unique” answer). The design also avoids “ratholing” on queries that would produce tons and tons of answers, like ?T: Sized, although there is still room to handle those better.
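
As a small example of the kind of cycle involved (the List type is just an illustration): proving List<T>: Send requires proving Option<Box<List<T>>>: Send, which leads straight back to List<T>: Send. Auto traits already handle this coinductively, but analogous cycles elsewhere in the trait system can surface as hard errors today; an SLG-style solver treats such cycles uniformly.

```rust
// A self-referential type: checking `List<T>: Send` recurses into
// `Option<Box<List<T>>>: Send`, i.e. back into `List<T>: Send`.
struct List<T> {
    value: T,
    next: Option<Box<List<T>>>,
}

fn assert_send<T: Send>() {}

fn main() {
    // A cyclic auto-trait proof is simply a successful (coinductive) one.
    assert_send::<List<u32>>();
}
```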

Lazy normalization. A key goal of this effort is to refactor how we handle projections to match what Chalk does. In Chalk, the type-checker never normalizes eagerly: instead, when it is asked to equate a projection like <T as Foo>::Bar with some other type U, it creates a new query ProjectionEq(<T as Foo>::Bar = U). This can be solved by normalizing, but also by “skolemizing” (that is, by making U equal to the “unnormalized” <T as Foo>::Bar). Handling the ambiguity inherent in this technique (there are often two ways to solve a projection-eq constraint, and no “best” way) relies on the on-demand solver.
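
A sketch of where the “skolemizing” answer is the right one (the function below is illustrative): when I is generic, there is no impl to normalize <I as Iterator>::Item against, so the unnormalized projection itself must be accepted as the answer to the ProjectionEq goal.

```rust
fn first_or<I>(mut iter: I, default: I::Item) -> I::Item
where
    I: Iterator,
{
    // The checker must equate the payload type of `iter.next()`,
    // `<I as Iterator>::Item`, with the type of `default`. With `I`
    // generic there is nothing to normalize against; the projection
    // stays "unnormalized" and the equation still succeeds.
    iter.next().unwrap_or(default)
}

fn main() {
    assert_eq!(first_or(std::iter::empty::<u32>(), 7), 7);
}
```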

Compiling to an internal representation. At present, the trait solver is more of an “interpreter” – it still operates in terms of Rust-level concepts like traits and impls. I’d like to move towards the way that Chalk does things: we first lower traits, impls, projections, and so on into Prolog-like predicates, which simplifies the solver greatly. You can think of this as a kind of MIR for traits.
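
As a hedged sketch (toy structs, not Chalk’s actual data types), the lowered form of impl<T: Clone> Clone for Vec<T> is roughly the Horn clause Implemented(Vec<T>: Clone) :- Implemented(T: Clone):

```rust
// A Prolog-like clause: the head holds whenever every goal in the body
// holds. The `&'static str` fields are a stand-in for real type and
// trait representations.
#[derive(Debug)]
struct TraitRef {
    trait_name: &'static str,
    self_ty: &'static str,
}

#[derive(Debug)]
struct Clause {
    head: TraitRef,
    body: Vec<TraitRef>,
}

fn main() {
    // Lowered form of `impl<T: Clone> Clone for Vec<T>`.
    let clause = Clause {
        head: TraitRef { trait_name: "Clone", self_ty: "Vec<T>" },
        body: vec![TraitRef { trait_name: "Clone", self_ty: "T" }],
    };
    // A solver over `Clause`s needs no knowledge of impls, supertraits,
    // or where-clauses: all of them lower to this one shape.
    println!("{:?}", clause);
}
```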

Higher-order pattern unification. Eventually, to support abstract type, we’re going to need to move towards higher-order pattern unification – a subset of full higher-order unification.
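
For orientation, return-position impl Trait (which compiles today) already poses a smaller version of the inference problem that abstract type generalizes: the concrete type hidden behind the alias must be deduced from how it is used.

```rust
// The caller sees only `impl Iterator<Item = u32>`; the compiler must
// infer the hidden concrete type (here `std::ops::Range<u32>`) from the
// body. `abstract type` lifts this to module-level aliases, which is
// where higher-order pattern unification is expected to be needed.
fn counter() -> impl Iterator<Item = u32> {
    0..10
}

fn main() {
    assert_eq!(counter().sum::<u32>(), 45);
}
```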

Implied bounds. @scalexm more or less “cracked the nut” on how implied bounds ought to work, but the plan relies on Chalk’s on-demand SLG solver.
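
A small example of the ergonomic win (the Set type is illustrative): today, the K: Hash bound declared on a struct must be repeated at every use site, whereas with implied bounds the mere appearance of Set<K> in a signature would carry the bound along.

```rust
use std::hash::Hash;

struct Set<K: Hash> {
    items: Vec<K>,
}

// Today this `K: Hash` repetition is mandatory even though `Set<K>`
// already implies it; with implied bounds it could be dropped.
fn len_of<K: Hash>(set: &Set<K>) -> usize {
    set.items.len()
}

fn main() {
    let s = Set { items: vec![1, 2, 3] };
    println!("{}", len_of(&s));
}
```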

Constraint-based type-checking. Once we have this infrastructure humming, the holy grail is to rework our type-checker so that it is expressed more and more in terms of predicates. This will help us to overcome some of the limitations we see today (e.g., the way that you can’t invoke a method like foo.bar() unless the type of foo is known at that point, or the way that coercions “eagerly” commit). We’ll have to be careful here though to avoid compilation-time blow-up.
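
To make the method-resolution limitation concrete: in the snippet below, removing the ::<f64> annotation yields error[E0282]: type annotations needed, because today’s checker must know the type of x before it can resolve .sqrt(), rather than deferring resolution until later constraints pin the type down.

```rust
fn main() {
    // Without `::<f64>`, this fails: the checker resolves `.sqrt()`
    // eagerly, at a point where the type of `x` is still unknown.
    let x = "3.0".parse::<f64>().unwrap();
    let y = x.sqrt();
    println!("{}", y);
}
```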

The first few steps

Actually, the first few steps are underway already. @sgrif has been moving towards universe-based handling of bounds, and my recent PR adds the idea of canonicalization. Most of this vision is already working in Chalk. There are some obvious next steps building on those PRs: for example, canonicalization can (possibly) be made more efficient by using a hashing scheme.

Probably the best next step is to make more use of the canonical/query system. For example, it’d be great to move the type checker over to use it instead of the existing fulfillment context (i.e., the type checker would keep only the top-level things it needs to prove). This will require experimentation.

Simultaneously, it makes sense to pursue some prototypes in Chalk. A number of features are not yet modeled there, such as impl Trait or higher-order pattern unification.

How to get involved

If this sounds interesting, check out the tracking issue or drop in on the gitter channel. You can also leave a comment here. Right now, things are pretty bare – I’ll be trying to flesh things out more over the coming days, and of course I’d love to get help with that too. =)

I’d like to help after finishing with NLL.

I don’t know exactly how I can contribute, but I’m on board to help however I can.

I surely want to help. I have some knowledge of compilers and have worked with Rust for a while.

So let me know what I can do.

Hope I am not too late for this; I would like to help improve the Rust trait system.

Not too late. We're just "getting going" =)

See also this thread
