About compiler plugins


The possibility for a library writer to write their own compiler plugins is an awesome feature. It’s already used by a lot of libraries, notably docopt, zinc, gl-rs (which, by the way, may dive straight into the AST), etc. However the public API that plugins use is very… dirty, to say the least, and cleaning it up doesn’t seem to be a blocker for 1.0.

This leads me to a few questions:

  • Is it possible to make changes in the Rust language in a backward-compatible way while being sure not to break existing plugins? Let’s say that post-1.0 you want to add a new language construct in Rust in a backward-compatible way. As far as I know, doing so will modify the AST structs that plugins are supposed to generate, which means that technically it would not really be possible to modify the language in a backward-compatible way at all.

  • Let’s say that you make breaking changes in the plugins API in Rust 1.0. Does this justify the release of Rust 2.0 instead of 1.1?

  • Right now (pre-1.0), does modifying the plugins API require an RFC? @eddyb suggested that the syntax::ext::build::AstBuilder trait should be totally removed, and that only quote_*! macros should be used, which I think is a good idea and would solve question #1. However, the quote macros would need to be extended, because there are several things that are not possible with them right now. Do these changes require an RFC?

The situation of plugins is a bit ambiguous, and I think that it wouldn’t harm if it was clarified.


These APIs are all highly experimental and anything using them should be itself regarded as an experiment. Plugins are useful, but it would be crazy to have APIs with backwards-compatibility guarantees at 1.0: it’s not what we’re aiming for with 1.0 (as often stated elsewhere, 1.0 is providing back-compat guarantees about the language and parts of the core libraries, with precise stability attributes covering the libraries).

Our choices are, essentially:

  1. avoid stabilising the APIs at 1.0, and work towards possibly getting a nice, stable API in some later 1.x release.
  2. block 1.0 on a nice, stable plugin API.
  3. stabilise whatever API we have at the 1.0 release date.

3 is clearly madness (we’d end up with something very suboptimal), and 2 could delay the 1.0 release indefinitely, which is not acceptable.


This is not feasible now; quotation has too many problems. The largest is that various staging issues make the quote macros very annoying to use inside libsyntax itself. (Quotation is also rather slow, due to the round-trip via strings.)


These APIs are all highly experimental and anything using them should be itself regarded as an experiment.

Does that mean that all the libraries that use plugins, or that depend on a library that uses plugins, are not supposed to be used in a stable environment?

100% of the gamedev-related work is based on gl-rs. gfx-rs also uses plugins for its shaders and vertex buffer attributes. The Iron web framework uses a makefile to generate some code, which I was planning to rewrite as a plugin because it doesn’t compile on Windows. Even Cargo uses docopt and was planning to encourage pkg_config! for external libraries.

That’s a lot of code.


Yes, and unless you think that 2. or 3. are sensible approaches, I don’t see how anyone can disagree (that is, instability is a direct consequence of the possible choices we have). Remember the 1.0 stable release is just providing a guarantee about the backwards compatibility of the language, it is definitely not “Rust is finished”; maybe some users will need to wait until 1.1 (or 1.3 or 1.10 or whatever) for their favourite feature to hit the stable stream.

Arguing that we have a lot of code using it and thus it must be supported immediately will just leave us with de facto stabilisation of a sub-par API, and, since it’s deep in the compiler, make it hard to change the compiler in future, stagnating language development. E.g. with our current APIs, we would not be able to modify the AST in a backwards compatible way, and so we could not add new syntactic constructs to the language (or change existing ones).

Personally, I would really like the APIs to be stable (I have 14 libraries on Rust CI, 7 of them are plugins/use the compiler APIs, and 1 provides an optional helper macro as a plugin) but the consequences of doing this now would be horrible, and I don’t particularly see why it needs to be part of 1.0.


I wrote RACC, which is a port of Berkeley YACC to Rust. RACC works as a compiler extension. It consumes tokens and produces an AST, directly into the target program’s existing AST. I really, really liked the experience of writing RACC, and I think compiler plugins could be a very powerful feature for Rust.

I view compiler extensions as tightly-coupled extensions of the compiler, which is an implementation of a language, but is separate from the language. I am 100% perfectly fine with the compiler implementation changing rapidly, being unstable, etc. I expect / hope / want that the compiler will evolve at a much faster pace than the language. And that means that I expect, as the maintainer of a compiler extension, that my extension will be broken many times during the early life of Rust, and I expect to rewrite portions of RACC many times, to adapt to the changes to rustc.

I don’t want to lock the compiler into a rigid API, especially not this early in its life.


And I’m working on providing Rust support as an IntelliJ IDEA plugin. This requires me to re-write the language model in Java.

Please, don’t write any compiler extensions because it makes my project impossible.


Compiler plugins are already a thing that are widely used, from macro_rules to derive to custom plugins like compile_msg and other things like glium, gfx-rs, etc. It’s a fight long gone, really. Fortunately, syntax extensions have a well-defined syntactic boundary, and tools like syntex will hopefully allow for pre-processing code to expand syntax extensions in cargo build scripts.

I still think IDEs can be useful in the face of syntax extensions – most code isn’t generated by syntax extensions, and dependencies can be expanded and their information stored…


The IntelliJ family of IDEs is kinda special - they can provide perfect refactoring, completion and code navigation if the language model is complete. It also has really awesome error recovery support.

Macroses make it impossible to build a complete model. Some simple macroses like vec! I can provide as intrinsics. I can also add support for quasi-quoting, even with refactorings. But compiler plugins are simply impossible. And I think they are entirely the wrong tool for something like parser generators.

I realize that removing macros support from Rust is impossible by now, but it’d be nice if there was some kind of pushback to discourage their use. Unstable API might be a good way to do it.


It does not seem reasonable to discourage the use of an incredibly powerful part of the language because it will make it more difficult to integrate with one IDE.


It’s powerful in the same sense as pointer arithmetic in C is powerful. It can be cool but it’s also extremely dangerous.

And I personally would prefer a language with perfect IDE support to a language with a rich macro system. But then, I spent many days debugging Scala code littered with implicits and macroses.


You should be hosting Rust in IntelliJ, not essentially porting it to Java. If you aren’t hosting the actual Rust compiler (and thus the compiler extensions), then you’re going to be duplicating a huge amount of the language, and your IDE will always be out of date with respect to the “real” Rust.

No, I’m not going to avoid using one of the most powerful parts of the language because one IDE chose to duplicate the language.


You can’t realistically host Rust in IDEA. There are multiple reasons:

  1. IDEA needs to be able to access the AST of the parsed text. It’s required for features like semantic diffs for version control or structural search&replace.
  2. Compiler must have good error recovery support. Rust currently does not fare that well.
  3. IDEA keeps a lot of indexes and updates them automatically when dependencies change - it’s something akin to perfect incremental compilation (except that IDEA doesn’t do codegen). Rust has to parse everything every time.
  4. Compiler plugins must be FAST. IDEA parses code immediately as you type it, so anything slower than 100-200ms is unacceptable.
  5. It’s easy to write inspections in IDEA, with automatic corrective actions.

Yes, writing a language model means reimplementing the language parser and name resolution (but no codegen). However, I consider this valuable in itself - a second implementation helps to identify under-specified spots in any language.

And I feel that if a language needs macroses then it’s actually a pretty weak language. They are a simple, neat and wrong solution.


I think you underestimate the power of a proper macro system. Type and procedural abstraction aren’t always sufficient; syntactic abstraction can help a lot, and can provide extremely powerful codegen opportunities that would otherwise be infeasible to do statically, require tedious and error-prone manual data entry, or are just plain ugly and verbose. See: html5ever, gfx-rs, derive and serialize/serde, etc.
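To make that concrete with a hedged sketch (not taken from any of the libraries named above): a plain macro_rules! macro can generate an enum together with its string conversion from a single table, exactly the kind of duplication that is tedious and error-prone to keep in sync by hand. The `string_enum!` and `Method` names here are made up for illustration.

```rust
// Hypothetical example: declare the variants and their strings once;
// the macro expands to both the enum and the matching as_str() method.
macro_rules! string_enum {
    ($name:ident { $($variant:ident => $text:expr),* }) => {
        #[derive(Debug, PartialEq)]
        enum $name { $($variant),* }

        impl $name {
            fn as_str(&self) -> &'static str {
                // one match arm per declared variant, generated in lockstep
                match *self { $($name::$variant => $text),* }
            }
        }
    };
}

string_enum!(Method {
    Get => "GET",
    Post => "POST"
});

fn main() {
    assert_eq!(Method::Get.as_str(), "GET");
    assert_eq!(Method::Post.as_str(), "POST");
}
```

Adding a variant means touching exactly one line; the hand-written equivalent has two places to forget.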

(Also note: “macro” is singular, “macros” is plural. “macroses” is not a word.)


There are redeeming points for the macro system in Rust:

  • Macros are designed to be clearly identifiable syntactically, so that any parsing tool can choose to treat them as opaque. This might be the only option for parser extensions provided by the crate under development, unless the tool implements some kind of instant build-and-use support.
  • There has been effort made (and it’s still ongoing) to try to make macro content syntax and expansion predictable and future-proof.
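A rough sketch of why that syntactic boundary matters for tools: a macro invocation is always `name!` followed by one balanced token group, so a parser that cannot expand a macro can still skip past it as a single opaque token tree and keep parsing the rest of the file. The `opaque!` macro below is made up for illustration.

```rust
// The macro accepts any token stream and ignores it entirely.
macro_rules! opaque {
    ($($tts:tt)*) => { 42 };
}

fn main() {
    // `if + while ,` is not a valid Rust expression, yet the file still
    // parses: macro input only has to be a balanced sequence of tokens,
    // so a tool can find the closing `)` without understanding the contents.
    let x = opaque!(if + while ,);
    assert_eq!(x, 42);
}
```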


It’s very strange that you say that you want to “support” Rust in some IDE, and yet you want to tell the Rust community what parts of the language are good (worth supporting) and bad (not worth supporting).

If your IDE does not support Rust macros, then I’m not even going to consider using it. Macros like debug!, warn!, println! are vital to all work that I have done in Rust. And in most of the work I’ve done in Rust, I’ve used a handful of carefully-written macros to accomplish reasonable goals.

And being able to write syntax extensions is a profoundly powerful capability. If you want to exclude this by design from your IDE, then I really can’t understand why you’re even considering doing this work.


I worked quite a lot with Scheme and its macro system and recently with Scala. It just happened that my job means that I mostly work with other people’s code.

And I do think that in many, many cases code-generating macros can be avoided. An example from gfx-rs:

let var_color = program.find_parameter("color");
program.set_param_vec4(var_color, [0.0, 0.0, 0.0, 1.0]);

This can be solved in some cases by using nameof-like constructs. It’s not applicable to gfx-rs, but macro usage in gfx-rs is also spot-on and very non-evil. They don’t magic-in user-accessible fields and are nicely self-contained.
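As a hedged sketch of what a nameof-like construct could look like in Rust, the built-in `stringify!` macro already gets most of the way there: the identifier stays an identifier in the source (so tools can see and rename it), yet it becomes a plain string at compile time. The `param_name!` macro and the `Program` type below are made up for illustration and are not the gfx-rs API.

```rust
// Hypothetical nameof-style helper built on stringify!.
macro_rules! param_name {
    ($field:ident) => { stringify!($field) };
}

// Stand-in for a shader program object; not a real gfx-rs type.
struct Program;
impl Program {
    fn find_parameter(&self, name: &str) -> String { name.to_string() }
}

fn main() {
    let program = Program;
    // instead of program.find_parameter("color") with a free-floating string:
    let var_color = program.find_parameter(param_name!(color));
    assert_eq!(var_color, "color");
}
```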

I’m not so sure about html5ever. Imagine for a second that you’d have to deal with match_token.rs but without any comments or documentation.

I understand that many people like macros and I most definitely do not want to tell anybody to stop using them. Just consider, please, that macros can make it very complicated to understand and support your code.


Macros based on quasi-quoting are fine. So there’s no problem at all with panic! or println!.

It’s the tree-generating macros that can be very problematic. They can do literally everything, up to and including formatting developer’s hard drive and emailing their private photos to everyone in their contact list.


Your “problematic” is my “awesome”.

They can do literally everything, up to and including formatting developer’s hard drive and emailing their private photos to everyone in their contact list.

The same is true of the compiler itself. It’s just a library of code. This is a really misleading thing to say. You’re trying to find reasons to exclude a powerful language feature because it makes your project more difficult. You chose this difficulty.


It’s not misleading. There’s a reason people invented regular expressions, for example – the simplicity of what they can match. Similarly, it’s good if tools can only do the things they’re supposed to do.

With the plugin system we have in place, we have unsandboxed (and thus probably very hard to sandbox) code that needs to run during compilation. Alternative implementations will have a hard time replicating this system securely.


It’s 100% true that macros and especially syntax extensions make IDE integration harder; @Cyberax is not being lazy or trying to make things hard for themselves. That is to say, yes, it is an awesome, powerful language feature, but also, yes, it makes tooling hard.

(Please review the code of conduct, particularly point 3: there’s absolutely no need to be so aggressive to someone who is trying to do something awesome for Rust. :smile:)

(That said, I don’t think we should remove the feature, but hopefully users of it can write their syntax extensions so they don’t do completely crazy things, to assist with sane IDE integration. IDEs will just have to handle them in a best-effort way. It seems like it would be equivalent to the halting problem to handle them “perfectly”.)