As of Rust 1.30, Rust has support for a very expressive feature called procedural macros. Procedural macros are great because they enable Rust to do more without baking a lot of potentially domain-specific information into the language. For example, automatic serialization of data types and expressive, type-safe abstractions for web servers can both be implemented using procedural macros.
However, while expressive, procedural macros can’t do everything. The
most glaring limitation is that procedural macros are inherently
local: they don’t have access to the context surrounding the code
being modified. So if you were to implement a hypothetical #[task]
function macro, it wouldn’t have the access to the surrounding scope
required to determine which function calls might be to other
tasks. This is pretty much a deal-breaker for implementing something
like Regent on top of Rust.
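To make the locality problem concrete, here is a toy sketch (everything in it is invented for illustration; a real procedural macro receives a `TokenStream`, not a string): an attribute macro sees only the tokens of the item it is attached to, so the best a hypothetical `#[task]` macro could do is collect call names syntactically. It has no way to ask whether `helper` names another task, because the enclosing scope is never handed to it.

```rust
// Toy model of a procedural macro's view of the world. A real proc
// macro receives a TokenStream for *one item only*; we model that as
// the item's source text. The surrounding module is invisible to it.

/// Hypothetical expansion step for a `#[task]` attribute macro.
/// Given only the tokens of the annotated function, try to find the
/// calls inside it.
fn expand_task(item_tokens: &str) -> Vec<String> {
    // All we can do is collect call-like names syntactically...
    let mut calls = Vec::new();
    for (i, _) in item_tokens.match_indices('(') {
        let head = &item_tokens[..i];
        if let Some(name) = head
            .rsplit(|c: char| !c.is_alphanumeric() && c != '_')
            .next()
        {
            if !name.is_empty() {
                calls.push(name.to_string());
            }
        }
    }
    // ...but we cannot answer "is `helper` a task or a plain
    // function?" -- that would require the enclosing scope, which the
    // macro never receives.
    calls
}

fn main() {
    let item = "fn main_task() { helper(); plain_fn(); }";
    println!("calls seen by the macro: {:?}", expand_task(item));
}
```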
The use of Regent here is just an example, because I’m the creator of Regent and intimately familiar with it. To be clear, I have no immediate plans to rewrite Regent, and the statements here are my personal opinions and do not represent the views of any past or present employers. However, I think it’s illustrative to think through what it would take to implement Regent in Rust, because Regent is representative of a wide variety of language extensions one might want to implement, and if one could implement Regent, one could probably also implement a large number of other extensions as well.
Why Language Extensions?
First though, why language extensions at all? In my opinion:
- Language extensions enable Rust to be simpler.
There are a variety of domain-specific things that aren’t really appropriate to add directly to Rust itself, but would be very nice to have. Without language extensions, there would be a constant pressure to keep adding features to Rust in order to satisfy these users. Adding language extensions allows the core Rust language to stay simpler, by allowing these features to live outside the core language. This keeps domain-specific abstractions where they should be: in libraries. It just happens that some libraries provide features that extend the language itself.
- Language extensions enable better user experience by directly encoding domain semantics.
Ultimately, language extensions are about providing a better experience. This can be seen in a limited way in how the Rocket library allows users to write very expressive code while remaining type safe. But it’s possible to go much further with this: e.g. Ebb is a language extension that enables certain kinds of parallel loops to be automatically run on the GPU, while Regent as mentioned above allows apparently-sequential programs to run on supercomputers with thousands of nodes. These languages simply aren’t capable of being encoded in Rust’s existing procedural macro infrastructure; they require too much context for that to work.
- Language extensions unlock novel capabilities.
To give a specific example with a language I’m familiar with: Regent is a language with sequential semantics that automatically runs on parallel and distributed supercomputers with performance competitive with MPI and C/C++ (and in some cases CUDA, though this is experimental). Needless to say, this is not something that is easy to implement without extensions in an existing general purpose language. Language extensions make it possible to explore novel techniques that have the potential to radically improve on the state of the art in certain fields. Adding language extensions to Rust would make these approaches accessible to Rust users without requiring aggressive changes to the Rust compiler with every new feature.
The bottom line is that language extensions expand on Rust’s promises by growing the set of things that can be done in the language while preserving type guarantees and enabling optimizations that are beyond anything one could reasonably expect to see from a general-purpose language; and they do so while incurring relatively minimal cost to the core Rust language and infrastructure.
But What About Language Feature X?
I’ve hinted at this above, but let me be fully explicit: there is no combination of existing or planned language features for Rust that can accommodate the language extensions I have in mind, either as procedural macros or libraries. For example, Regent is (very, very) mildly dependently typed. A number of language features and key optimizations, which are critical to Regent’s ability to scale to thousands of nodes, depend on Regent’s ability to perform analysis based on its expressive type system.
I do not think it would be prudent for Rust to add these features in order to enable Regent to be encoded directly in Rust, because the additional features would add a burden to all Rust users, but would only benefit those using this specific language extension.
Why Not Fork the Compiler?
In the past, one proposed alternative was to effectively fork the Rust compiler, and add your language extension directly to Rust itself. This is undesirable for two reasons:
- The cost of adding language extensions is high.
This is, for better or worse, simply a fact of life for any mature compiler infrastructure. Compilers are big and complicated, and if the recommended way to add language extensions is to fork and modify an existing compiler, there is a lot of complexity (both inherent and accidental) that comes along with that. While it may seem like an appealing approach because it allows deep integration with the host language, in my experience it isn’t worth it in the end.
Practically speaking, I’ve seen this approach tried at least three times as extensions to C/C++, and in all three cases I believe the choice of extending a C/C++ compiler was ultimately considered to be a net burden on the project. In the case of one unpublished project, the project effectively never got off the ground because of the complexity of dealing with the Clang C++ infrastructure. In another case, the complexity inherited through the C++ compiler was one of the nails in the coffin that eventually led to the project being abandoned, because making changes to the language became too difficult. In the last case, as far as I know the language extension was successfully implemented, but keeping up with Clang changes continues to be a large, ongoing burden.
(I am not comfortable mentioning the specific project names in public, but would be happy to discuss in private if anyone has more questions.)
It’s worth noting that it doesn’t have to be this way! As a counterexample, I would submit Terra, a language designed for meta-programming that has very strong support for language extensions. I’ll get into more detail about why Terra is great for language extensions below, but for now I would simply make the comment that of all the compiler infrastructures I’ve ever worked with, Terra has been by far the fastest and easiest to get started with. At the very least, I think it’s worth learning from Terra’s approach.
- Language extensions can’t interoperate if everyone forks the compiler.
One of the best parts of having first-class support for language extensions is that it allows language extensions to interoperate. For example, it’s theoretically possible to use Ebb, Regent, Opt and Darkroom in the same program. Why? Because these are all language extensions implemented in the Terra programming language. Rust could serve a similar role if it also had first-class language extensions.
Why Rust and Not Another Language?
At this point you might be wondering why we should bother with Rust at all, if these other languages already exist and work adequately.
In my opinion, Rust is a good host for these languages because it has the right balance of simplicity, type system expressivity, minimal runtime (no GC or heavy VM infrastructure), and interoperability with the native system infrastructure (i.e. the C programming language). These benefits have been widely discussed so I won’t belabor the point here.
As a bonus, Rust also has a vibrant user community, which makes me relatively comfortable investing in Rust infrastructure for the long term. A language is a large investment, so any would-be language extension creator has to carefully weigh the risks that come along with the host language. For example, because Regent uses Terra, and Terra uses LLVM, one pain point for us has been the maintenance burden imposed by LLVM’s refusal to support a stable API. (LLVM’s C API is stable-ish, but doesn’t support what Terra needs to get the job done.) Rust also uses LLVM, but the advantage here is that because Rust’s developer community is larger, hopefully the cost of upgrading LLVM can be amortized across more users.
Also, speaking as someone who has developed what is probably the largest Terra language extension ever, while I meant what I said about Terra being the easiest to get started with, there are pain points as well. For some of these, it’s not obvious how one would even address them within the Terra infrastructure. Therefore, my recommendation would be to view Terra as an opportunity to learn lessons, rather than as a direct competitor. In my view, Terra and Rust occupy different niches anyway.
Lessons Learned from Terra
First of all, it may be worth reviewing what Terra is, in order to ground the following discussion.
At a basic level, Terra should be viewed as a way to meta-program a C-like language from Lua. This is probably easiest to see if we walk through a basic example.
```lua
local a = 5
terra add5(x : int)
  return x + a
end
print(add5(3))
```
The top level of a Terra program is basically a Lua script. When you run this file, it’s literally running in the Lua interpreter. The first line creates a Lua variable, named `a`, set to the Lua value `5`.

The second line starts a Terra function. This is still executing in the Lua interpreter! Think of the script execution as being like compile time in a language like Rust. So when we “execute” the second line in Lua, we’re compiling the function in Terra. In this case we’re defining a function with one parameter. Any unbound references inside the function are looked up in the enclosing Lua scope. In this case, `a` is a Lua variable at the file-level scope. Lua variables are spliced into the Terra function body before the function is compiled. So the body of the function is effectively equivalent to writing `return x + 5`. Note that this means that Lua variables are effectively read-only from Terra; if you wrote `a = 3` in Terra, that would expand to `5 = 3`, which would result in a compiler error.

After the `end` keyword we return to normal Lua execution. At this point, `add5` is basically a global Lua variable that happens to hold a Terra function object. We can call that function, as in the last line, which results in the code being JIT’d with LLVM, or we could call `terralib.saveobj` and dump a `.o` that we could link into another program.
Terra’s support for language extensions basically amounts to two features:
- Terra’s parser can be augmented with arbitrary keywords.
For example, Regent adds a `task` keyword, which can occur anywhere a Lua statement can start. When Terra sees this keyword, it basically hands the lexer over temporarily to the Regent compiler (which is simply a Lua function). Regent implements everything from the parser on up. When Regent is done, it hands the lexer back and returns a Lua object representing the task that has been defined. Then control returns to Lua, just like when defining a Terra function.
- Terra language extensions have full access to Lua, and to Terra’s Lua APIs and language features.
This is surprisingly powerful. For example, Terra functions are Lua objects that just happen to support a certain API. If Regent returns an object that supports the same API, it can be used interchangeably. The same goes for Terra types.
This also means that Terra objects (functions, types) can be introspected. Regent can even invoke the Terra compiler if it wants to. This feature ends up being extremely useful, because it makes it unnecessary to, e.g., replicate parts of the Terra type system. If there is an aspect of Terra’s type system which is non-trivial to reimplement by hand, one can simply invoke the Terra compiler to do the heavy lifting. For example, to check that a type cast is valid in Terra, one can simply do:
```lua
function can_cast(from_type, to_type)
  local function helper()
    local terra f(x : from_type)
      return to_type(x)
    end
    f:compile()
  end
  return pcall(helper)
end
```
And then you can use `can_cast(S, T)` to check if a cast is valid between any types `S` and `T`. The cool part is that the code in the body of the internal function `f` can be arbitrary Terra code, so the same basic strategy can be used to type check any sort of Terra statement or expression. Compare this to the amount of work you’d have to do if you were simply writing this using `rustc` or Clang internal APIs!

Similarly, when doing code generation, Terra language extensions can simply use Terra’s support for meta-programming to build up the desired Terra code. Writing a code generator as a series of quasi-quoted expressions is much, much easier than generating LLVM IR by hand.
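A rough sketch of the keyword hand-off described above, transliterated into Rust (the lexer, the registry, and the `task` extension here are all invented for illustration): the host parser keeps a table of extension keywords, and when it encounters one, it hands the lexer to the extension’s parser, which consumes what it needs and returns an object.

```rust
use std::collections::HashMap;

// Invented toy infrastructure: a "lexer" is just a cursor over
// whitespace-separated tokens, and an extension produces a String
// describing the item it defined.
struct Lexer<'a> {
    tokens: Vec<&'a str>,
    pos: usize,
}

impl<'a> Lexer<'a> {
    fn new(src: &'a str) -> Self {
        Lexer { tokens: src.split_whitespace().collect(), pos: 0 }
    }
    fn next(&mut self) -> Option<&'a str> {
        let t = self.tokens.get(self.pos).copied();
        if t.is_some() { self.pos += 1; }
        t
    }
    fn peek(&self) -> Option<&'a str> {
        self.tokens.get(self.pos).copied()
    }
}

type ExtensionParser = fn(&mut Lexer) -> String;

// A hypothetical Regent-style extension: consumes `task <name> ... end`
// and returns an object describing the task it defined.
fn parse_task(lex: &mut Lexer) -> String {
    let name = lex.next().unwrap_or("<anon>");
    while let Some(tok) = lex.next() {
        if tok == "end" { break; } // hand the lexer back after `end`
    }
    format!("task:{}", name)
}

// The host parser: on an extension keyword, hand over the lexer.
fn parse_program(src: &str, extensions: &HashMap<&str, ExtensionParser>) -> Vec<String> {
    let mut lex = Lexer::new(src);
    let mut items = Vec::new();
    while let Some(tok) = lex.peek() {
        if let Some(ext) = extensions.get(tok) {
            lex.next(); // consume the extension keyword
            items.push(ext(&mut lex)); // extension parses from here on up
        } else {
            lex.next(); // the host language would parse normally here
        }
    }
    items
}

fn main() {
    let mut exts: HashMap<&str, ExtensionParser> = HashMap::new();
    exts.insert("task", parse_task);
    let items = parse_program("print hello task saxpy body end print done", &exts);
    println!("{:?}", items);
}
```

The essential point is that the extension owns the lexer for the duration of its construct, and whatever it returns flows back into the host’s normal scope, just as a Regent task becomes an ordinary Lua value.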
These two features combine to make writing language extensions quick and surprisingly fun, especially compared to trying to hack something on top of traditional compiler infrastructures. It really is the fastest way to hack something up that I’ve seen to date.
Having said that, I’ve also got a long laundry list of issues with Terra. Not all of them are Terra’s fault, and some are inherited from Terra’s dependencies, but I think they’re still illustrative to learn from.
The first couple of issues are implementation issues which I believe would pretty much just go away if one were to use Rust. The main exception is CUDA support, which would require more thought if Rust wanted to make it a first-class target.
- Mis-features inherited from Lua.
While in some ways, a minimal, dynamic language is exactly what you want for quickly prototyping a language compiler, Lua also has a number of features which I consider to be flaws and which ultimately end up hurting more than they help in the long run, especially as language extensions become more sophisticated and maintenance becomes more important. These include:
- Permissive function parameter checking. Lua does not check that the number of arguments matches the number of formal parameters, so it’s possible to pass too many, or too few. Unlike Lua’s permissive global variable checking, I’m not aware of any way to fix this without adding manual checks at every call site. In practice, I don’t know anyone who actually writes adequate checking code at the entry points of their functions, so errors tend to propagate far down the line before something gets messed up badly enough to cause the compiler to actually halt.
- Zero- vs one-based indexing. Lua uses one-based indexing. This might be fine in isolation, but Terra code regularly has to interact with C code which uses zero-based indexing. Terra chooses to align with C here, but either choice would result in a certain amount of cognitive dissonance.
- No support for user-defined hash functions.
- No built-in support for pretty-printing basic data structures. Everything in Lua is an object, including basic data structures like lists and maps, so there is no way to determine a priori how an object should be printed, and if you try to implement a custom `tostring` that fixes this, it needs to be supported by all transitive dependencies or you just end up with the same problem at another level in your dependency tree.
- Very small standard library, and no official package ecosystem. In practice, dependencies get vendored or simply reimplemented locally, because the package ecosystems that do exist for Lua don’t support Terra.
- Mandatory tail-call elision means stack frames disappear from backtraces. There is no way to work around this except to modify the source code. This is especially infuriating when your function is essentially a large dispatch table over AST node types or similar… there is basically no way to figure out which branch you went down except by printf debugging.
- Poor quality built-in debugger. Maybe I’m just not smart enough, but I still have not figured out how to use Lua’s debugger effectively, so I end up doing a lot of printf debugging.
- Backwards-incompatible language changes. Lua is infamous for breaking the language with every release. Terra uses LuaJIT, which is more stable, but LuaJIT doesn’t support all the architectures Terra needs to run on, e.g. PPC64le for the Summit supercomputer, so in certain cases we’ve been forced to go back to normal Lua, which means portable Terra programs need to use a subset of Lua features.
All of these combine to make maintaining a language extension something of a pain. Most of this could be fixed just by using a nice, statically typed language like Rust.
- Inherited maintenance burdens.
Not all of these are exposed to Terra users per se, but the Terra community is small enough that to some extent Terra’s maintenance burdens become users’ maintenance burdens.
- LLVM (lacks backwards compatibility, as noted above).
- CUDA’s NVVM can be extremely picky about what LLVM versions it works with. In practice this forces Terra to support a larger set of LLVM versions than would otherwise be required, because users on supercomputers don’t generally get to choose what CUDA version they use.
- LuaJIT is stable, but doesn’t necessarily support all architectures that Terra needs to run on, e.g. PPC64le for the Summit supercomputer.
The remaining issues are more fundamental to the approach of using a dynamic language as a meta-programming layer. It is honestly not obvious to me if these can be fixed in Terra:
- Inability to statically analyze code.
Terra compilation involves executing an arbitrary Lua script, which may even add novel syntax via the `import` keyword. This makes it pretty much impossible to statically analyze Terra programs. This also has knock-on effects on all sorts of other things you’d like to do with code, e.g. refactoring tools, find-definition, linters, and code formatters. This affects not just the downstream users of Regent, etc.; it affects the developers of language extensions as well.
- Language extension compile time.
One challenge in developing a high-quality language extension is maintaining fast compile times. Because the compiler itself is written in Lua, and isn’t the sort of code that necessarily JITs well, it is easy for this code to be slow.
Optimizations that would improve compile times, like incremental compilation, are challenging to implement. If your language extension allows calls to arbitrary Terra functions, then hashing a function to determine if it has changed means hashing not only the language extension’s own AST, but also the Terra ASTs of any functions it calls. And the Terra AST can reference Terra types, which can be arbitrary Lua objects potentially defined dynamically based on program inputs… The problem gets messy really quickly, and the only way I’ve found to reduce this complexity is to cache the code at the LLVM IR level immediately before running LLVM optimizations on it. However, this misses a lot of potential speedups that could be had by avoiding rerunning the language extension (and Terra) compilers.
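A minimal sketch of why the cache key is transitive (the toy IR here is invented for illustration): to know whether a cached compilation of a function is still valid, one has to hash not just its own body but, recursively, everything it calls.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Invented toy IR: each function has a body (here just source text)
// and a list of callee names.
struct Function {
    body: String,
    calls: Vec<String>,
}

// Hash a function together with everything it transitively calls.
// A change *anywhere* in the call graph changes the resulting key,
// which is what makes incremental caching so tricky in practice.
fn transitive_hash(
    name: &str,
    program: &HashMap<String, Function>,
    memo: &mut HashMap<String, u64>,
) -> u64 {
    if let Some(&h) = memo.get(name) {
        return h;
    }
    memo.insert(name.to_string(), 0); // placeholder to break cycles
    let f = &program[name];
    let mut hasher = DefaultHasher::new();
    f.body.hash(&mut hasher);
    for callee in &f.calls {
        transitive_hash(callee, program, memo).hash(&mut hasher);
    }
    let h = hasher.finish();
    memo.insert(name.to_string(), h);
    h
}

fn main() {
    let mut program = HashMap::new();
    program.insert("helper".to_string(),
        Function { body: "return 1".to_string(), calls: vec![] });
    program.insert("main".to_string(),
        Function { body: "helper()".to_string(), calls: vec!["helper".to_string()] });
    let key = transitive_hash("main", &program, &mut HashMap::new());
    println!("cache key for main: {:x}", key);
}
```

And this is the easy version: in Terra the “body” would include types that are arbitrary Lua objects, possibly constructed dynamically, which is exactly where the approach starts to break down.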
Some of these would also (I hope) go away by using a more traditional static compiler infrastructure, and writing language extensions in Rust would make it easier to ensure those extensions are fast and high-quality.
Thoughts on Language Extension Support in Rust
I’ve got fewer ideas here, but I’ll list what comes to mind.
One thing that is clear to me from my experience with Terra is that writing a language extension as a meta-program is by far the easiest way to go. Rust is part of the way here with the `syn` crate, but `syn` (and Rust procedural macros in general) is not hygienic and does not even attempt to deal with Rust at a semantic level. It has been a while since I’ve worked on the Rust compiler, so I’m not sure what specifically to suggest, but one thing I am sure about is that mucking around with `rustc` internal APIs does not sound like fun to me. I understand why for engineering reasons `rustc` is divided into different stages, but it prevents the sort of “just type check this statement” pattern that you can do so easily in Terra.
My recommendation would be not to expose `rustc` internal APIs (which I expect the `rustc` team does not want to commit to stabilizing anyway), but to think about higher-level APIs to accommodate making effective language extensions. I’m not quite sure what that would look like, but I’d recommend making it more along the lines of “type check this token stream as an anonymous function in the current module” rather than exposing individual compiler passes like name resolution. Similarly, while clearly some compiler representations will need to be exposed, I’d recommend limiting this as much as possible; probably types have to be exposed at some level, but e.g. MIR probably does not (and possibly not even HIR, as it may be sufficient to talk about code input and output as token streams with ASTs living in `syn` or similar).
It’s important to think up front about how types and functions will be extended in the various language extensions. For example, in Regent, tasks are Lua objects that mostly conform to the Terra function API, but expose additional methods (e.g. you can look up the CUDA variant of a task, if it exists). Similarly, Regent types are just Terra structs, but have additional metadata associated which Regent uses during type checking.
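One way to picture this in Rust terms (a sketch under invented names, not a proposal for a concrete API): extensions produce objects that conform to the host’s function interface, while exposing extra methods of their own, much as a Regent task conforms to the Terra function API but additionally lets you look up its CUDA variant.

```rust
// Sketch: a common interface for "things that behave like functions",
// which extensions implement while adding their own capabilities.
// All names and signatures here are invented for illustration.

trait FunctionLike {
    fn name(&self) -> &str;
    fn call(&self, x: i64) -> i64;
}

// An ordinary host-language function.
struct PlainFn;
impl FunctionLike for PlainFn {
    fn name(&self) -> &str { "plain" }
    fn call(&self, x: i64) -> i64 { x + 1 }
}

// A hypothetical Regent-style task: usable anywhere a FunctionLike is
// expected, but with an extra method for looking up a GPU variant.
struct Task {
    has_cuda: bool,
}
impl FunctionLike for Task {
    fn name(&self) -> &str { "task" }
    fn call(&self, x: i64) -> i64 { x * 2 }
}
impl Task {
    fn cuda_variant(&self) -> Option<&str> {
        if self.has_cuda { Some("task_cuda") } else { None }
    }
}

// Host-side code only ever needs the common interface, so plain
// functions and extension-defined tasks mix freely.
fn run_all(fs: &[&dyn FunctionLike], x: i64) -> Vec<i64> {
    fs.iter().map(|f| f.call(x)).collect()
}

fn main() {
    let plain = PlainFn;
    let task = Task { has_cuda: true };
    println!("{} results: {:?}", task.name(), run_all(&[&plain, &task], 10));
    println!("cuda variant: {:?}", task.cuda_variant());
}
```

The analogous story for types would attach extension-specific metadata to otherwise ordinary host types, which is exactly how Regent layers its type information onto Terra structs.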
In the design of any major feature, it is helpful to keep potential users in mind from the beginning. For this purpose I’d recommend using existing Terra language extensions as examples of the sorts of things that language extensions would like to do. The various domain-specific languages associated with the Delite project are also good to keep in mind. Here is a non-exhaustive list of ones I’m aware of:
Conclusion
Language extensions enable better usability and, in some cases, aggressive optimizations beyond what any general-purpose compiler can be expected to accomplish. An infrastructure for language extensions in Rust could position Rust to be a leader in high-performance, high-productivity programming in a variety of use cases.