Splitting compiler team meetings into proactive/reactive

The current compiler team meetings are entirely reactive: we review P-high bugs, address nominated questions, etc. This is vital. But we rarely take the time to proactively do design work. I think there are lots of interesting questions to discuss in depth about how to structure the compiler. Moreover, as @arielb1 pointed out to me, there is a certain amount of overlap between the compiler team meetings and the #rust-triage meetings that take place every two weeks.

Therefore, I am thinking that perhaps we should say that, on those weeks when #rust-triage takes place (i.e., every 2 weeks), we will schedule a proactive design discussion. We should schedule these topics in advance so that people have some time to prepare. The idea would be to go over some planned changes, discuss the overall design, and make sure we're all satisfied.

Typically, I would say that such discussions work best over some kind of video chat. But I'm not entirely sure; maybe IRC is suitable. If we did opt for video chat, I'm not sure what would be the best system - Vidyo is sort of "Mozilla only". WebRTC-based things are Linux-friendly, but don't scale well to more than a few participants in my experience. Google Hangouts might work but doesn't work in Firefox right now (sigh).

Thoughts?


We decided that since next week is #rust-triage, we will make our first effort at a proactive compiler team meeting be next Thursday, May 4th, at the usual time (4pm Boston time). We will discuss over IRC and see how it goes.

The topic will be RLS integration into the compiler, and @eddyb and @jntrnr are going to sync up beforehand to prepare a bit of a plan.

We may also discuss other things, of course.


@alexcrichton points out to me that there have been proposals to add clippy to the repo as well - we might want to talk a bit about that at the same time.


That sounds interesting! I would love to participate (mostly listening :slight_smile: ) in the discussion as well.

Where exactly on IRC will the meeting be held? Please be specific, I am not very comfortable with IRC yet :slight_smile:

Timezone-friendly time: https://www.timeanddate.com/worldclock/fixedtime.html?msg=Compiler+team+meeting&iso=20170504T16&p1=43

It's on the regular #rustc IRC channel (which can be found here https://www.rust-lang.org/community.html, for example).


The main question I had for @jntrnr, which I hope he can answer ahead of the meeting, is "what does the RLS want, in the short/medium term, to ask the compiler?"

That is, assuming the compiler does good caching across compilations and incremental recomputation of various intermediate results (currently called "queries"), and that it is IDE-friendly (i.e. suitable for running on code while it is being edited), what information would you want to query it for on behalf of the IDE?

Feel free to come up with your own list, especially if you feel like the RLS doesn't align perfectly with your own goals.


One note here: we've been talking about trying to revamp rustdoc to work through the RLS soon, which likely has some implications for this question as well.

Indeed, that came up with both @jntrnr and @steveklabnik.

One thing to note there is that a lot of the semantic mismatch with rustdoc is due to the erasure of type aliases in the type system, for which @nikomatsakis is content with having an alias kind of projection.

Not needing the RLS to implement rustdoc satisfactorily is very important IMO, because then the RLS and other tooling get all of that for free.

Would someone mind correcting the typo in the title? The current one seems a bit weird to me.

Here you are :slight_smile:

Questions from an IDE to the compiler.

I'll be using the term "IDE" for a thing which can semantically analyze source code and use the results of the analysis to help the user write code in the editor. In practical terms, "IDE" means the backend of the RLS (rls-analysis) and IntelliJ Rust.

The IDE maintains a complete and precise parse tree, so questions can be formulated in terms of an AST (though it's convenient to use spans/offsets to refer to AST nodes). That is, the IDE will ask "what declaration does this name refer to?", but it won't ask "where are the references in this file?" (it already knows) or "what attributes are available on this function?" (that's easy to calculate syntactically).

Collect Errors

What are the compiler errors in file "foo/bar/baz.rs"?

This query is needed to show errors as the user types.

I think it's important to restrict the scope of the query to a single file (errors on demand), to avoid checking the majority of the project, which can be costly (O(size of the project)) even with fully incremental compilation.

This query can be fully async and not very fast, as it does not block any user interaction.

From the user's POV, every error should be accompanied by a quick fix: an action to automatically fix the error, perhaps with some additional input from the user. In an ideal world, we would like to keep errors in the compiler and quick fixes in the IDE (that is, we want to free the compiler from the knowledge of how to edit and refactor code). It's not obvious how to achieve this separation though: some of the information needed to propose a fix is available only at the moment the compiler decides that there's an error.
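
As a rough sketch of the shape such a per-file query could take (the names and types here are illustrative assumptions, not an existing compiler or RLS API):

use std::path::PathBuf;

struct Diagnostic {
    span: (usize, usize), // byte range within the file
    message: String,
}

trait ErrorQuery {
    // Restricting the query to one file keeps the cost proportional to
    // that file, not O(size of the project). A real version would be
    // async; a sink callback keeps this sketch dependency-free.
    fn collect_errors(&self, file: PathBuf, sink: &mut dyn FnMut(Diagnostic));
}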

Resolve Reference

What definition does this reference point to?

I expect this to be the most popular query. The IDE will usually ask about all the references in a single file (to do highlighting) and will often ask about single references here and there in different files all over the project. This query must be as fast as possible. Ideally, on the IDE side we would love to see a synchronous API like reference.resolve() -> Option<Declaration>.

This query should also handle ambiguity. If the name is ambiguous (the name can mean two different methods from different traits, so UFCS is needed), it should be possible to get all possible targets: reference.multi_resolve() -> Vec<Declaration>. This should also handle use declarations, which can import a name from several namespaces.
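
A minimal sketch of that synchronous API, with simple stand-in types and the queries placed on an analysis handle rather than on the reference itself (all names hypothetical):

#[derive(Clone, PartialEq)]
struct Declaration {
    def_path: String, // some stable way to identify the definition
}

#[derive(Clone)]
struct Reference {
    span: (usize, usize), // position of the reference in the source
}

trait Resolve {
    // The synchronous `resolve` wished for above.
    fn resolve(&self, r: &Reference) -> Option<Declaration>;

    // Ambiguity: two trait methods needing UFCS, or a `use` importing
    // from several namespaces, yield every candidate target.
    fn multi_resolve(&self, r: &Reference) -> Vec<Declaration> {
        // default: at most one target
        self.resolve(r).into_iter().collect()
    }
}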

Note that the compiler does not need to handle the reverse query. That is, "find usages" can (and probably should) be handled by the IDE. The IDE maintains a text index of all references in the project (cheap to maintain, because it depends only on the syntactic contents of a single file). For find usages, the IDE finds the list of candidates using this text index and then filters the candidates by invoking the compiler's "Resolve Reference" query (it should be fast, and, again, even for "random access" it should be possible to avoid analyzing the majority of the project).
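
And a sketch of that IDE-side "find usages", reusing the Resolve trait and types from the previous sketch: the purely textual index narrows the candidates, and the compiler's resolve query filters them.

use std::collections::HashMap;

fn find_usages(
    analysis: &dyn Resolve,
    text_index: &HashMap<String, Vec<Reference>>, // name -> textual occurrences
    name: &str,
    target: &Declaration,
) -> Vec<Reference> {
    text_index
        .get(name) // cheap: purely syntactic index
        .into_iter()
        .flatten()
        .filter(|&r| analysis.resolve(r).as_ref() == Some(target))
        .cloned()
        .collect()
}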

Collect Visible Names

What could the name of this reference be such that it resolves to anything?

That is, for the code

struct S;
impl S {
    fn foo(&self) {}
    fn bar(&self) {}
}

fn f(s: S) {
    s.foo // <- `.foo` is the reference we will be asking about
}

the answer should be [foo, bar]. Basically, this question gives completion variants without filtering by the actual reference name. It can be seen as a more general version of the "Resolve Reference" query. In fact, in IntelliJ, resolve reference is usually implemented as "Collect Visible Names" followed by filtering by the actual method name.

This question will be asked about at most one identifier in the file; it should be reasonably fast, but it can be async (ideally, some names are available immediately and the others are supplied asynchronously).
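
A sketch of the query's shape, including the IntelliJ-style derivation of resolve from it (names are illustrative assumptions):

trait CollectVisibleNames {
    // Every name a reference at this position could resolve to,
    // i.e. completion variants without filtering. For the `s.foo`
    // example above, this returns ["foo", "bar"].
    fn visible_names(&self, at: (usize, usize)) -> Vec<String>;
}

// "Resolve Reference" as "Collect Visible Names" plus a name filter.
fn resolves(q: &dyn CollectVisibleNames, at: (usize, usize), name: &str) -> bool {
    q.visible_names(at).iter().any(|n| n == name)
}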

What If Query

The queries should be available in a "what if" mode. That is, the IDE should be able to inject some Rust code fragment anywhere into the existing project and ask questions about it. This can be trivially implemented as "modify file, ask question, rollback modification", but it makes sense to build in some support for it, because you can avoid a lot of invalidation work (and "what if" queries may be asked in contexts that block the user's actions, so they need to be as fast as possible).

An example of a "what if" query would be a completion request for the following code, with the caret just after the dot.

fn f(s: S) {
    s.
}

Here, the IDE can't ask the "Collect Visible Names" query, because there's no reference yet. So the IDE inserts a dummy identifier

fn f(s: S) {
    s.rust_rulezz
}

and asks "Collect Visible Names" for it.
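
A sketch of what built-in "what if" support could look like, assuming the analysis state can be forked cheaply (all names here are hypothetical):

struct Edit {
    offset: usize,         // where the fragment is injected
    inserted_text: String, // e.g. the dummy identifier "rust_rulezz"
}

trait WhatIf {
    type Snapshot;

    // Fork the current state with `edit` applied; the original state
    // is untouched, so "rollback" is just dropping the snapshot, and
    // unrelated cached results stay valid.
    fn fork_with(&self, edit: Edit) -> Self::Snapshot;

    // Run any query (here: completion) against the forked state.
    fn visible_names_in(&self, snapshot: &Self::Snapshot, at: usize) -> Vec<String>;
}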

Type related queries

What is the type of this expression? Does this type implement this trait?

These queries will be a little awkward, because the answer is not just some node in the existing AST but some representation of a thing that exists only in the compiler's memory.

I am not sure about the What traits are implemented by this type query. This is a problematic kind of "search" query, which requires looking at the whole project, but it looks like it must be implemented in the compiler anyway to do "Collect Visible Names" (last time I looked, the compiler filtered the candidate traits by the name of the method being resolved, but this won't work for completion).
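
A sketch of how these type-oriented queries might look, with Ty and TraitId as opaque stand-ins for compiler-internal representations (all names hypothetical):

struct Ty(u32);      // opaque stand-in for a compiler-internal type
struct TraitId(u32); // opaque stand-in for a trait definition

trait TypeQueries {
    // "What is the type of this expression?"
    fn type_of(&self, expr_span: (usize, usize)) -> Option<Ty>;

    // "Does this type implement this trait?"
    fn implements(&self, ty: &Ty, trait_id: TraitId) -> bool;

    // The expensive "search" direction: every trait implemented by a
    // type. Needed for completion, where there is no method name to
    // filter candidate traits by.
    fn traits_implemented_by(&self, ty: &Ty) -> Vec<TraitId>;
}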


@jntrnr, @eddyb and I took some time to hash things out and try to get a rough idea of what "the idea" really is. Here are our notes:

https://public.etherpad-mozilla.org/p/rust-compiler-meeting-2017-05-04

The key ideas here:

  • We described the high-level kinds of queries that the RLS wants to do
  • We presume that we'll break these down into more fundamental operations the compiler will support
    • General idea (sketched below):
      • some query to go from a span to an (essentially opaque) id (DefId, HirId)
      • then various queries to go from the id to something richer (e.g., to get the Ty of an expr from its HirId)
      • and some amount of "collection" queries (e.g., find all methods invokable at this position on a value of type Ty)
    • The compiler team would keep those basic queries compatible and working
    • Eventually, there should be a strong suite of unit and performance tests
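
As a rough illustration of that span-to-id-to-richer-data pipeline (every name here is a hypothetical stand-in, not the real rustc API):

struct Span { lo: u32, hi: u32 }
struct HirId(u32);
struct DefId(u32);
struct Ty(u32);

trait CompilerQueries {
    // span -> (essentially opaque) id
    fn node_at(&self, span: Span) -> Option<HirId>;
    // id -> something richer
    fn type_of_expr(&self, id: HirId) -> Option<Ty>;
    fn definition_of(&self, id: HirId) -> Option<DefId>;
    // "collection" query: all methods invokable at this position
    // on a value of type Ty
    fn invokable_methods(&self, ty: Ty, at: Span) -> Vec<DefId>;
}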

Some concerns:

  • Have to work out how we will distribute results from libstd, manage interaction with the RLS, etc.

One thing to note, before I look at the rest of your comment: rustc doesn't care about files (but this is good, read on for why).

Also, consensus on diagnostic gathering seems to be converging on "start fine-grained analysis queries (e.g. typeck & borrowck) in the background, but in a way you can interleave with user queries".

We could do this for the current editor view (or rather, for the definitions that intersect that view), not just the current file, without more complexity, even while scrolling. By starting at the current view and alternating before/after to fill in the whole file, then the files around it in the same directory (or in adjacent tabs, or immediate users of changed signatures, etc.; there are multiple heuristics here), and so on, we could go through the entire crate without noticeable downtime.

Incremental recompilation in this "query engine" model involves skipping some of that fine-grained work when the state it observed last time hasn't changed, so the "outward spiral traversal" could be very cheap on most keypresses (the main barrier to that right now is macro expansion).
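
A minimal sketch of that outward-spiral order, assuming nothing more than a position-sorted list of item offsets and a view position (purely illustrative):

fn spiral_order(items: &[usize], view_start: usize) -> Vec<usize> {
    // index of the first item at or after the current view
    let pivot = items.partition_point(|&pos| pos < view_start);
    let (mut before, mut after) = (pivot, pivot);
    let mut order = Vec::with_capacity(items.len());
    // alternate: next item after the view, then next item before it
    while before > 0 || after < items.len() {
        if after < items.len() {
            order.push(items[after]);
            after += 1;
        }
        if before > 0 {
            before -= 1;
            order.push(items[before]);
        }
    }
    // e.g. spiral_order(&[0, 10, 20, 30, 40], 25) == [30, 20, 40, 10, 0]
    order
}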


I disagree. There are several mechanisms at play here, and the compiler could do automatic incremental updates for such cached queries.

However, it could be prototyped just fine in the RLS (which, btw, is what IntelliJ should use IMO - not sure there's any point in skipping the LSP and touching the raw compiler internals directly, which is what we hope the RLS will use internally).

Here I agree that the filter is not fundamental. It might be useful for performance, but FWIW the compiler goes through the whole list every single time right now (for method calls, field accesses, etc.) - that's a common theme: rustc is pretty close to the ceiling of straight-line code, and caching is where most of the wins come from nowadays.

The only support I could possibly see for this would be "forking" the incremental state, so the "rollback" is effectively free (other than using some extra memory, but... an actual process-level fork does CoW, so it might work very well).

Ah, there's no need to make it that convoluted: as @nikomatsakis has already pointed out, the completion query would be split: first you get the type from the expression (which should work even if parse error recovery had to be performed to get some sort of an AST), and then the field/method completion query would be type-based, so there's no need for that fake identifier.

We have no plans for making the compiler as a whole more granular than crates. Queries, OTOH, can be pretty specific, and we already look only at the impl blocks for that trait to do the on-demand coherence & impl "binning" (e.g. an impl Trait for Vec<...> won't be seen when looking for Box<X>, but impl Trait for Box<Y> would be) - so I don't expect that to be slow at all.


This sounds about perfect! A similar "spiral" is how errors are highlighted in IntelliJ natively. An error annotator is a visitor which takes an AST node and an error sink; IntelliJ manages launching visitors for the elements in the current viewport, and different error annotators share basic information about name resolution/type inference. However, we do stop at the file boundary: errors in other files are not that useful while you are not finished with the current one.

The nice thing about a stupid text/AST-based index is that the update is guaranteed to be fast. I worry about the worst-case behavior of incremental computation in general.

And this would exactly duplicate the name resolution logic. Faking the identifier helps to simplify parser recovery and to make completion and name resolution exactly the same. It's only a trick of course, but a nifty one :slight_smile: In IntelliJ, we attach cached info to AST nodes, so when you create a "forked" file with the fake identifier, the cache of the nodes of this file is empty, but all the other caches are intact.

Some thoughts on the current ideas for making the RLS work better with incremental compilation.

Current situation

(I realise I should go into this in more detail at some point, maybe a blog post soon).

  • The RLS is a long-running service; it invokes Cargo to find out how to build a project and to build the dependent crates. It invokes rustc (in process) to compile the primary crate. Each run of rustc is short-lived. The RLS coordinates rebuilds as necessary.
  • The compiler compiles a crate (we don't run trans, so incremental is not an issue).
  • We run save-analysis as the last step in compilation. This does some pretty simple processing of the compiler's data structures to generate a somewhat abstracted view of the program.
    • For dependent crates, we dump all this info into a JSON file. We then don't need to rebuild dep crates in new sessions.
    • For the primary crate, we pass the same data structures in memory to the RLS.
    • There is also an API for compiler plugins to use that gets save-analysis data for one node at a time, but the RLS doesn't use this.
  • The rls-analysis crate (part of the RLS) does some more processing across all crates. This includes cross-referencing for things like 'find all refs'. Data is stored in memory. We can update this data, but only a whole crate at a time (no incremental update of per-crate data). We don't have to redo processing for all crates, though.
  • The RLS responds to API calls from the IDE to access this data.

Note that the RLS ensures data is always available for the IDE after the first rebuild. Even when we are re-compiling or re-processing data, the old data is still available.

In terms of time: compilation is slow (minutes for a large crate), passing data takes a trivial amount of time, reading data (only for dep crates, and only once) is non-trivial but fast, processing data is fast-ish (tens of seconds for a large crate, more often a small number of seconds), and replying to IDE queries is very fast (we only do very fast processing here, essentially just string manipulation).

Advantages

  • Abstraction layer between tools and the compiler - reusable interface (rustw, rustdoc).
  • Separation of concerns between building and managing builds.
  • Compiler can stick to traditional (batch) compilation model.
  • Very fast to query (e.g., instant 'find all refs').
  • Works with std libs and crates without source available.
  • It's actually implemented and works pretty well.

Issues

  • There are some queries where this model is not a good fit - e.g., code completion (currently using Racer), auto-import (not yet implemented), etc.
  • No clear path forward for working with incremental compilation.
  • There is no single source of truth - the compiler and RLS have independent sets of data.
  • Slow to build - long wait on startup, plus lots of processor use for rebuilds (power concerns).
  • Not quick enough to respond when working with large crates (this is rarely a problem in practice, however).

Motivation

Incremental type checking is crucial to a good IDE experience and is planned to be implemented soon. However, the current RLS is not well placed to utilise it when it arrives. We need to consider either changing the model of the RLS to be more compiler-focused or finding a way to make the current model work with incremental compilation.

Somewhat orthogonally, to support compiler-backed code completion and similar queries, we need to be able to ask the compiler about individual bits of code rather than getting info about the whole crate. This is actually quite easy in the current model; however, we really want to be able to ask these questions without rebuilding.

There seems to be some feeling that a different model for the RLS would be just generally better in some way. I don't believe this is true. Obviously, by taking advantage of incremental compilation the RLS will be quicker, but I don't think the current model is causing problems due to speed in any other ways. I don't believe an alternate model will be meaningfully quicker, except by taking advantage of incremental compilation. I don't think that the current model is particularly susceptible to bugs - there are of course plenty, but it is software, and it is tackling a problem with some intrinsic difficulties. I think any other proposed model would be equally susceptible to bugs, just different ones. I don't see any engineering benefits (better abstraction, less code duplication or specialisation, etc.) to implementing the RLS at a lower level of abstraction.

Solutions

Use a long-running compiler for everything

The RLS would still work with Cargo and call the compiler when it wants to build. The compiler, however, would maintain its state between runs, and the RLS would query the compiler instead of (or in addition to) processing save-analysis data. For dependent crates, the compiler is called in batch mode via Cargo.

The extreme version of this solution is that the RLS does not maintain any data of its own, and we query the compiler for all requests. The moderate version is that the RLS keeps its data for things like 'goto def' and 'find all refs', but queries the live compiler for code completion and other requests.

To answer queries, the compiler has an API (similar to the non-dumping version of save-analysis, though this could be improved; in any case it is orthogonal to the architecture); internally, the compiler looks up data in its internal data structures and does any necessary processing online. For 'find all refs'-style queries, the compiler could either cross-reference its data when running in 'RLS-mode' or cross-reference on the fly. To answer queries that include dependent crates, the compiler would have data via the crate metadata (this would require making metadata more detailed, to include all function body information for dependent crates).
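
A minimal sketch of the eager, 'RLS-mode' variant of that cross-referencing, with DefId and Span as stand-ins for the compiler's real types (all names hypothetical):

use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct DefId(u32);
#[derive(Clone, Copy)]
struct Span { lo: u32, hi: u32 }

#[derive(Default)]
struct CrossRefs {
    refs: HashMap<DefId, Vec<Span>>,
}

impl CrossRefs {
    // Called for every resolved reference while compiling in RLS-mode.
    fn record(&mut self, def: DefId, span: Span) {
        self.refs.entry(def).or_default().push(span);
    }

    // 'find all refs' becomes a lookup instead of a whole-crate walk.
    fn find_all_refs(&self, def: DefId) -> &[Span] {
        self.refs.get(&def).map_or(&[], |v| v.as_slice())
    }
}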

Since the compiler is taking on some of the role of the RLS, this would mean the save-analysis layer in the compiler (or equivalent) gets larger and more complex.

A big change here is being able to run the compiler in a long-running mode, rather than a batch mode.

Advantages

  • One source of truth for data.
  • Fast for most queries because no pre-processing.
  • Possibly a good design for long-term evolution - closer to the Roslyn model.

Issues

  • Pretty much rewriting the RLS and save-analysis from scratch.
  • Requires extensive changes to the compilation model.
  • Forces the compiler to keep more data for queries that need cross-referencing (e.g., 'find all refs'). We probably don't want to do this work if we are not compiling in RLS-mode. (Alternatively, do this on demand, but that makes queries slower).
  • Requires more metadata for dep crates.
  • Compiler gets bigger and more complex, poor separation of concerns.
  • Unclear if we can keep answering queries when rebuilding (but perhaps compilation can be quick enough that this is not a problem).
  • Weaker abstraction boundary between compiler and tools - expect more breakage.

Incremental update of RLS data

The compilation/RLS model remains mostly the same. We can update the RLS after an incremental compilation with data at sub-crate granularity.

We would need to be able to generate save-analysis data for a single item (e.g., a function) at a time, and rls-analysis would need to update its data incrementally with such updates. This would require some work, but I don't foresee any major difficulties.

Advantages

  • A small, incremental change to let us take advantage of incremental compilation.
  • No changes to the compiler.

Issues

  • Only solves the incremental compilation problem, not the 'online' querying problem.
  • Still two sources of data.

Query a null-incremental run of the compiler

Most of the compilation/RLS model remains the same. When we need to run an 'online' query (e.g., code completion), we run the compiler again in batch incremental mode. There are no changes since the last run, so compilation should be very fast (I hope we could special-case enough that it becomes practically instant), but the compiler loads all the incremental compilation data (possibly lazily?). The compiler can then answer queries from the RLS based on this data.

Advantages

  • Small, incremental change.

Issues

  • Only solves the 'online' query problem, not the incremental compilation problem.
  • The RLS and compiler have two different modes for communication.
  • May not be fast enough.

First, if we want to do autocomplete, we need to be able to do "on-demand" type checking (which needs someone to write it), but that pretty much requires a "warm" compiler.

The old CSV-based save-analysis has the problem that it can't decide whether it is a "low-level" or a "high-level" interface, and it requires some understanding of Rust to apply. The new JSON-based save-analysis is good insofar as it is a "def/ref" interface (although we need to figure out the "local var is a def id" thing), but its story for types is imperfect (cough retokenise_span cough).

@nikomatsakis asked me to chime in here with some thoughts from the rustdoc perspective.

This quote provides a nice jumping-off point:

In terms of what kinds of queries Rustdoc would want to make, I think this is the biggest difference. That is, rustdoc generally is not a long-running service, at least not today. And even if it were, it would be more of a "refresh the state of the world" situation than an IDE-type situation.

Basically, Rustdoc is more concerned with the entire state of the world; that is, the basic process is: please give me everything that exists, and then I will render it in some fashion. "What is every trait that's defined?", then looping through each one and getting its particular information. The latter is much closer to the IDE use case; that is, when rendering the docs for a particular type, you'd want to query stuff about it. But we also need the broad state of the world generally.

... I thought I had more to say, but I guess that's really it at the moment.

I think "fully incremental on-demand compilation" (aka "change a signature, propagate errors in a zigzag pattern") is somewhat orthogonal to all of this. But having an "rls-single" alternate driver (or even linking the RLS against the rustc libs, not that that is a good idea, because rustc really wants a supervisor) that answers RLS queries on stdio would not be such a change from the current compilation model.

Maybe I don't work with big enough projects, but traversing all the refs in the world shouldn't take more than a few seconds, which is my expectation for a "find all uses of def" query - maintaining a cross-reference through code changes will probably take more time than it saves. For highlighting locals we can just traverse all refs within the current item.

I think the current "RLS" abstraction boundary is ill-defined - what exactly is a ref? There's no good definition, so compiler refactorings mix things around and cause random breakage. IDE "elements" need to be a first-class concern, not something that "half piggybacks on compiler internals".


Thanks @nrc for that great write-up. I want to try and focus the conversation a bit.

First off, I think that whichever way we select, there is a certain amount of work in common. For example, we clearly are going to want to run the compiler in incremental mode. This is less important now, but I expect us to be doing incremental type checking in the next few months, so that won't be true for long. And we're only going to get better at it.

This actually doesn't demand much of the RLS. At the moment, it has to supply the compiler with a good directory to use for its temporary state. I presume that we already avoid invoking the compiler in parallel with itself, so really that's the only thing that has to happen. The compiler will otherwise "figure it out".

Right now, that means that the compiler will serialize/deserialize its state to/from the disk. Another relevant thing is that I would like the RLS to become the "repository" for incremental state. Much as we supply the compiler with a "virtual file system" for its inputs, I think we can make the "storage" for the incremental state something that can be swapped in and out. Ideally, we would be able to avoid serializing and deserializing state at all in between compilation sessions (though that should always be an option; e.g., when the "current crate" changes).
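
A minimal sketch of what that swappable storage could look like, assuming a simple key/value shape (the trait and the in-memory implementation are hypothetical, not the compiler's actual interface):

use std::collections::HashMap;

trait IncrementalStore {
    fn load(&self, key: &str) -> Option<Vec<u8>>;
    fn store(&mut self, key: &str, value: Vec<u8>);
}

// The RLS as the "repository": state is kept in memory between
// compilation sessions, so nothing needs to be (de)serialized at all.
#[derive(Default)]
struct InMemoryStore(HashMap<String, Vec<u8>>);

impl IncrementalStore for InMemoryStore {
    fn load(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
    fn store(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
}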

So I'm just going to assume that we are using the compiler in incremental mode. Given that, the question is: how should the RLS figure out the answers for things like "autocomplete" or "find all refs"? It seems to me that this is not really about save-analysis - presumably we can all agree with @arielb1 that the save-analysis format is "underspecified" and likely to change as the compiler gets refactored. But let's just assume for now that we had some kind of format that we were all very happy with (cough e.g., something based on datalog cough :wink:).

Really, the question is to choose between two approaches to communication:

  • Compiler produces some data structure or file summarizing information that the RLS might want
    • This could also be consumed by other tools, such as rustdoc.
  • Compiler manages its own internal data structure, but responds to some well-thought-out queries.

Both of these have a certain amount of work in common. That is, just "using queries" doesn't magically bring stability or ease of maintenance. After all, if you take a look at rustdoc, it uses tons of queries, but it's still quite mired in the compiler internals. Similarly, just "using an external file" has the same effect - updating save-analysis is to some extent an active drag on refactoring. I think no matter which approach we choose, we have to spend some time thinking out what would be a relatively stable interface for identifying nodes, representing types, and so forth. So for now let's assume we have done that.

It'd be interesting to try to enumerate the criteria that we should even be using here. This is what I came up with so far:

  • Responsiveness - how easy is it to ensure very fast response times to various queries, particularly as edits come in?
  • Maintenance - which will be easier for us to maintain (both on the RLS and the compiler side)?
  • …

I've got to go now, so I'll leave it at that.

UPDATE: Edited time-frame for skipping typeck'ing to be more conservative. =)