Pre-RFC: procmacros implemented in wasm

Procmacros are shared objects containing arbitrary code which run in rustc's address space. This has two problems:

  1. There's no guarantee that a procmacro is deterministic. Many, such as serde, perform completely deterministic transformations. However, there's nothing which prevents a procmacro from accessing arbitrary files or talking to network services. This makes them non-deterministic from a build system's point of view, because they have dependencies which aren't visible to, or taken into account by, the build system. And there's no general way to distinguish deterministic from non-deterministic ones.

  2. Worse, a procmacro could be actually malicious, and set out to do bad things. It could make unrelated changes to the filesystem, or even manipulate rustc's internal state. I don't think this is a present threat, but it could definitely happen - similar things have happened in other language ecosystems.

(Yes, these both apply to build scripts, but that's a conversation for another day.)

These two points came up in various forms in discussions over the course of RustConf. Various solutions were suggested: supplying procmacros with a stubbed-out libstd, prohibiting unsafe in procmacros, or just hoping for the best.

But then during @linclark's closing keynote it struck me: we could compile procmacros to WebAssembly, and run them in rustc in that form. Procmacros have a very narrow API, so it would be easy to serialize between the rustc and wasm environments. Within wasm their code would have no access to external state, and so would be forced to be completely deterministic. Even if malicious, they couldn't do anything other than attack rustc through the very narrow API. And procmacros aren't very performance sensitive - their effect on compile time is mostly from what they generate rather than while they're generating it.
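Because the interface is so narrow, the data crossing the host/wasm boundary is essentially just token streams in and token streams out. A minimal sketch of that shape, with all names hypothetical (this is not the real proc_macro or rustc internals, which also carry spans and other metadata):

```rust
// Sketch of the data that has to cross the host/wasm boundary.
// The real proc_macro API is richer (spans, hygiene, etc.); this
// models only the essential TokenStream -> TokenStream shape.

/// A macro invocation serialized into a flat byte buffer that could
/// be copied into the wasm module's linear memory.
fn serialize_tokens(tokens: &str) -> Vec<u8> {
    tokens.as_bytes().to_vec()
}

/// What the wasm side would do: deserialize, transform, reserialize.
/// Here the "transformation" is a trivial placeholder.
fn expand_in_sandbox(input: &[u8]) -> Vec<u8> {
    let source = String::from_utf8(input.to_vec()).expect("utf-8 tokens in this sketch");
    // A real procmacro would parse and rewrite the tokens; we just
    // wrap them to show data flowing through the narrow interface.
    format!("/* expanded */ {}", source).into_bytes()
}

fn main() {
    let input = serialize_tokens("struct Foo;");
    let output = expand_in_sandbox(&input);
    println!("{}", String::from_utf8(output).unwrap());
}
```

The point of the sketch is that nothing in this round trip requires the sandboxed code to touch anything other than the bytes it was handed.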

When talking with @alexcrichton and @dtolnay, they both mentioned another benefit: we could put prebuilt procmacros on crates.io and distribute them directly. Apparently procmacro build time is a considerable pain point, which this would solve in one sweep.

I've made an initial attempt on the cargo side, and I'm going to start exploring rustc next to see what needs to happen there. I'm excited because it's my first real change to cargo, my most substantial change to rustc, and also the first time I've done anything with either procmacro internals or WebAssembly.

The main question in my mind is how many procmacros can actually run in this environment? Non-deterministic ones are explicitly excluded, and I'm pretty sure staples like serde will be fine. But are any procmacros using threads or rayon? In principle that would be fine, but as I understand it, wasm doesn't yet support threads. Are there any other things I'm overlooking? Are there any procmacros where wasm performance would be an issue?

(I want to get to a prototype stage soon to answer these questions directly, but I thought I'd ask.)


Heh, that would be an interesting shortcut to binary distribution on crates.io.

I wouldn't mind it at all, but personally I do think the long-term goal should be for crates.io to cache binaries for all targets, so that all dependencies can benefit from it rather than only proc macros.


But are any procmacros using threads or rayon? In principle that would be fine, but as I understand it, wasm doesn't yet support threads.

It would be extremely unusual for a proc macro to use threads because none of the proc_macro API types implement Send or Sync.


Though the obvious attack is to inject malicious code into the program being compiled, which is still pretty powerful/scary.


Askama is a somewhat popular user of proc macros that reads text files from the file system within the crate root. It might be able to get by if there was something like the include_* macros that worked in attributes.


I'd probably skip that as a sandboxable candidate for the first pass. To handle file IO, I'd propose extending the procmacro API. This would allow rustc to mediate (read only) filesystem access so it can control what paths are used, etc.


Yes yes yes please!

This will make the life of IDE writers so much easier, because it would be possible to run proc-macros in-process with strict isolation and time budgeting.


But file input and output, in addition to network I/O, can be valid actions. For example, imagine some kind of analogue of cbindgen as a procmacro. You annotate functions with

```rust
fn func(x: CStruct)
```

and as a result it is converted not just to extern "C" fn func(x: CStruct), but a C header file is also generated outside of the source tree. And because CStruct may be defined in some other crate, filling the cargo cache potentially needs network I/O. This kind of stuff happens in build scripts today, but code generation during a build script run is not very convenient.
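The cbindgen-style transformation described above can be sketched as two pieces of output per annotated function: the rewritten Rust item, and the C declaration destined for a header file outside the source tree. All names here are illustrative, not a real tool's API:

```rust
// Hypothetical sketch of what a cbindgen-like procmacro wants to do:
// besides rewriting the item to extern "C", it also wants to emit a
// matching C declaration, which is a file-output side effect.

/// Minimal model of an annotated function signature.
struct FfiFn {
    name: &'static str,
    arg_ty: &'static str, // assumed to be a C-compatible type name
}

/// The Rust-side rewrite: prepend the ABI specifier.
fn rewrite_item(f: &FfiFn) -> String {
    format!("extern \"C\" fn {}(x: {})", f.name, f.arg_ty)
}

/// The side effect a fully sandboxed macro could not perform directly:
/// the C declaration that belongs in a generated header.
fn header_line(f: &FfiFn) -> String {
    format!("void {}(struct {} x);", f.name, f.arg_ty)
}

fn main() {
    let f = FfiFn { name: "func", arg_ty: "CStruct" };
    println!("{}", rewrite_item(&f));
    println!("{}", header_line(&f));
}
```

The first output is ordinary token rewriting and fits the sandbox fine; it's the second, the header fragment, that needs some mediated escape hatch.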

Like any code you trust your system with. I never understood this kind of special concern about (proc-)macros: how exactly is it worse to run arbitrary malicious code from a function that is invoked by a proc-macro than from a function that is invoked normally? If you run any code on your system, it has the potential of doing (almost) arbitrary damage; and if you don't trust a crate, you shouldn't use it anyway.


perhaps this + sandboxing tests = secure CI servers?

The assumption is that sandboxing tests is easier than sandboxing rustc.

I'll repeat this point:

There are two valid cases for impurity in proc-macros that I can directly think of: file input for extra data (e.g. Pest reads an external grammar file) and file output (e.g. generated header fragments or such).

File input is super simple to solve: just allow the proc-macro to ask rustc to read in a file. Bonus points if we can get span information for it and tokenize it. But make sure not to preclude the use of binary (non-UTF-8) files.

Even just giving proc-macros a way to resolve macros would require solving this for include!.

File output is harder to handle in a principled way. But it's solvable, and I think it's another big step down in usage from file input. A solution would probably again be to instruct rustc that you want to create a file with given contents, and then let rustc do the file creation. That would allow rustc to control where output goes as well.
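One way to picture the mediated-I/O idea for both cases: the macro never touches the filesystem itself, it asks the host (rustc) to read or write, and the host enforces policy and records dependencies. The trait, paths, and policy below are entirely hypothetical, a sketch of the shape rather than a proposed API:

```rust
// Sketch: host-mediated file I/O for proc macros. Nothing here is a
// real rustc or proc_macro interface; names and policies are made up.
use std::collections::HashMap;

trait MacroHost {
    /// Read a file the macro declares as an input (e.g. a grammar).
    fn read_input(&mut self, path: &str) -> Result<Vec<u8>, String>;
    /// Ask the host to create an output file with the given contents.
    fn write_output(&mut self, path: &str, contents: &[u8]) -> Result<(), String>;
}

/// Host that only allows reads under the crate root and records every
/// access, so the build system learns the macro's real dependencies.
struct MediatedHost {
    crate_root: String,
    files: HashMap<String, Vec<u8>>, // stands in for the real filesystem
    dep_log: Vec<String>,
}

impl MacroHost for MediatedHost {
    fn read_input(&mut self, path: &str) -> Result<Vec<u8>, String> {
        if !path.starts_with(&self.crate_root) {
            return Err(format!("read outside crate root: {path}"));
        }
        self.dep_log.push(path.to_string()); // visible to the build system
        self.files.get(path).cloned().ok_or_else(|| "not found".into())
    }
    fn write_output(&mut self, path: &str, contents: &[u8]) -> Result<(), String> {
        // The host, not the macro, decides where output may go.
        if !path.starts_with("target/generated/") {
            return Err(format!("write outside output dir: {path}"));
        }
        self.files.insert(path.to_string(), contents.to_vec());
        Ok(())
    }
}

fn main() {
    let mut host = MediatedHost {
        crate_root: "src/".into(),
        files: HashMap::from([("src/grammar.pest".to_string(), b"rule = ...".to_vec())]),
        dep_log: Vec::new(),
    };
    assert!(host.read_input("src/grammar.pest").is_ok());
    assert!(host.read_input("/etc/passwd").is_err());
    assert!(host.write_output("target/generated/ffi.h", b"void f();").is_ok());
    println!("deps: {:?}", host.dep_log);
}
```

The useful property is that the dependency log falls out for free: every input the macro ever sees went through the host, so the build system can re-run the macro exactly when one of those inputs changes.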


I strongly support this undertaking.

In IntelliJ-Rust, we can (in some experimental/internal versions) work with proc macros. For this, we use a proc-macro expander: a command-line tool that links with compiled proc macro DLLs and invokes them.

But we have some problems with this approach. I think all of them can be solved with some kind of sandboxing (including WASM).

  1. In general, we don't know when to re-expand a macro, because we can't track its dependencies. A proc macro can read the FS, access the network, or do weirder stuff (I've heard of macros that generate random numbers at compile time?), so we need to know e.g. which files it depends on in order to re-expand it when such a file changes. I think rustc has the same issue, and it's the reason proc macros are not incremental (am I right?). WASM can solve this problem by providing a specific API for the things a macro can access. E.g. if it can access the FS, it has to use a provided FS API, not the system API. Then we can provide such an API and thereby intercept all access to the FS.
  2. Proc macros can be malicious. This applies to IDEs even more than to a compiler, because in IDEs we need to expand macros to provide code analysis (navigation/completion/etc.), so it has to be done when opening a project. Agreed, it's not too wise to execute user code obtained from a (possibly) unknown source when you just want to read it in an editor. WASM code can only use the provided API, so it can be malicious only if the API allows it.
  3. It's hard to limit the execution time of a macro. We have to do this in an IDE because in an IDE it's usual for code to be invalid/buggy. Where a compiler can just hang, an IDE must recover. We could execute each proc macro in a separate process and just kill it on a timeout, but that is too slow: we want to execute all macros in one process, in parallel. So we would have to invent some kind of thread interruption. WASM can be interpreted, so we can control its execution precisely; e.g. we can count every executed instruction and limit the maximum quantity per macro invocation.
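The instruction-budget idea in point 3 can be illustrated with a toy model: instead of a wall-clock timeout on a whole process, the host decrements a fuel counter per executed instruction and stops the macro deterministically when it runs out. This is only a sketch of the mechanism (the loop stands in for a wasm interpreter; it is not real wasm execution):

```rust
// Toy model of fuel-metered execution: the host bounds a macro by
// instruction count rather than by killing a process on a timeout.

#[derive(Debug)]
enum Outcome {
    Finished(u64), // the macro's (simulated) result
    OutOfFuel,     // the macro exceeded its instruction budget
}

/// Run a simulated macro body of `steps` instructions with a budget.
/// `fuel` is decremented once per instruction, as a metering
/// interpreter (or instrumented wasm) would do.
fn run_with_fuel(steps: u64, mut fuel: u64) -> Outcome {
    let mut acc = 0u64;
    for _ in 0..steps {
        if fuel == 0 {
            return Outcome::OutOfFuel; // interrupt a hung macro cleanly
        }
        fuel -= 1;
        acc += 1; // the "work" of one instruction
    }
    Outcome::Finished(acc)
}

fn main() {
    // A well-behaved macro finishes inside its budget...
    assert!(matches!(run_with_fuel(100, 1_000), Outcome::Finished(100)));
    // ...while a runaway one is stopped deterministically, in-process.
    assert!(matches!(run_with_fuel(u64::MAX, 1_000), Outcome::OutOfFuel));
    println!("ok");
}
```

Because the cutoff is a count rather than a clock, it is reproducible across machines, which matters for both the IDE case and build determinism.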

Extending the API to allow proc macros to nicely declare their inputs and outputs sounds like a good idea.

In the case of a WASM + WASI route, dependencies could theoretically be captured automatically from any I/O requests (while ensuring access is limited to the relevant directories), but a dedicated API could do it better, more declaratively.


Having an explicit API for all I/O in proc macros would be a huge win for dependency tracking, and if it makes it possible for things to be marginally more secure and even precompiled, all the better!


Does this mean we'll need a webassembly runtime to expand these procedural macros?


This could be explored further, like cargo printing warnings about where certain code is being given write access, or requiring the top crate being built to specifically allow certain paths etc...


Is there any reason that this would not be better implemented using Miri instead of WASM? Can Miri be sandboxed?


It is completely sandboxed by default; even with sandboxing disabled, the only non-deterministic part is getting a random value from the OS. But apart from not being able to read anything from the OS, Miri is simply too slow (at least 10x slower than native/wasm, due to the many checks performed during execution and the lack of a JIT) for this purpose.


Isn't miri currently used for constant evaluation?

Does that imply that constant evaluation is 10x slower than it could be?

Not necessarily. The 10x is comparing compiled code to interpreted code, so if you were comparing cost for constant eval you'd have to factor in the time spent in LLVM (actually, probably in its JIT mode rather than ahead of time).

For proc-macros, you can compile once to run multiple times, so the compile time amortizes out, especially if we can distribute precompiled wasm.