This last week we had the rustc compiler team design sprint. It was our second such sprint; at the first one (last year), we simply worked on pushing various projects over the finish line (for example, in an epic effort, arielb1 completed dynamic drop during that sprint).
This sprint was different: we had the goal of talking over many of the big design challenges that we’d like to tackle in the upcoming year and making sure that the compiler team was roughly on board with the best way to implement them.
I, or others, will try to write up many of the details in various forums, either on this blog or perhaps on internals, but I thought it would be fun to start with a quick post describing the overall topics of discussion. For each one, I'll give a quick summary and, where possible, point you at the minutes and notes that we took.
The internals link from the blog is broken
Glad to see the on-demand compilation!
For better IDE support, it is desirable to be able to compile just what is needed to type-check a particular function.
The real granularity for an IDE is the file: you want to know everything about the file on the screen (not only errors, but also name resolution for every identifier and the types of expressions). If you do the analysis yourself, you also care about the high-level structure (items without bodies) of other files, but this can hopefully be avoided with the help of the language server.
However, finer-grained granularity may be useful for implementing completion. One problem with completion is that once you have typed `foo.`, you want to see completions, but actually analyzing this code is tricky because it is invalid. To solve this problem, you insert a dummy identifier at the caret position, making the file at least partially syntactically correct (that is, correct enough to determine the syntactic role of the identifier at the caret position), and run the analysis on that modified file.
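The dummy-identifier trick described above can be sketched as a simple text patch applied before parsing. This is an illustrative sketch, not a real compiler API: the marker name `__dummy__` and the function name are made up for the example.

```rust
// Hypothetical sketch: patch the source at the caret so that `foo.`
// becomes syntactically valid before handing it to the parser.
fn insert_dummy_ident(source: &str, caret: usize) -> String {
    let mut patched = String::with_capacity(source.len() + "__dummy__".len());
    patched.push_str(&source[..caret]);
    patched.push_str("__dummy__"); // placeholder name, chosen arbitrarily
    patched.push_str(&source[caret..]);
    patched
}

fn main() {
    // Caret sits right after `foo.` -- unparseable alone, fine once patched.
    let src = "fn main() { foo. }";
    let caret = src.find("foo.").unwrap() + "foo.".len();
    let patched = insert_dummy_ident(src, caret);
    assert_eq!(patched, "fn main() { foo.__dummy__ }");
    println!("{}", patched);
}
```

The parser then sees an ordinary field access or method call, so the completion engine can ask the normal name-resolution and type-checking machinery what `foo.__dummy__` could refer to, and offer those candidates at the caret.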
Question: what would be the key for the query "does the function `foo` type-check"? How is `foo` identified? Will it be some kind of id, which is only available after you parse and macro-expand the whole crate? Are there any plans to make parsing incremental? The simplest thing to do is to cache the results of lexical analysis, because they depend on nothing except the file contents.
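That last observation, that lexing depends only on the file contents, is exactly what makes it easy to cache: the contents themselves (or a hash of them) can serve as the query key. A minimal sketch, where the `Token` type, the whitespace "lexer", and the `LexCache` API are all stand-ins invented for the example:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Stand-in token type; a real lexer would carry spans and token kinds.
#[derive(Clone, Debug, PartialEq)]
struct Token(String);

// A cache keyed purely by a hash of the file contents: identical text
// always yields identical tokens, so no other inputs are needed.
struct LexCache {
    cache: HashMap<u64, Vec<Token>>,
    misses: usize, // counts how often we actually had to lex
}

impl LexCache {
    fn new() -> Self {
        LexCache { cache: HashMap::new(), misses: 0 }
    }

    fn tokens(&mut self, text: &str) -> &Vec<Token> {
        let mut h = DefaultHasher::new();
        text.hash(&mut h);
        let key = h.finish();
        if !self.cache.contains_key(&key) {
            self.misses += 1;
            // Toy "lexer": split on whitespace.
            let toks = text
                .split_whitespace()
                .map(|s| Token(s.to_string()))
                .collect();
            self.cache.insert(key, toks);
        }
        &self.cache[&key]
    }
}

fn main() {
    let mut cache = LexCache::new();
    cache.tokens("fn main ( )");
    cache.tokens("fn main ( )"); // same contents: served from the cache
    assert_eq!(cache.misses, 1);
    cache.tokens("fn other ( )"); // changed contents: must re-lex
    assert_eq!(cache.misses, 2);
    println!("misses = {}", cache.misses);
}
```

Queries further down the pipeline (parsing, name resolution, type-checking a particular function) are harder precisely because their keys are not raw text: identifying `foo` stably across edits requires some id scheme that survives re-parsing and macro expansion.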