TL;DR: Only supporting memory allocation is the ultimate trade-off.
First of all, thanks for a very complete answer. I am afraid, though, that you are drastically underestimating the difficulty of implementing a full VM while at the same time overestimating the effort of user education; hopefully, by the end of this answer, I will have explained what makes me think so and why I believe that memory allocation alone is the sweet spot to aim for. If anything is unclear, please call me out on it.
I will start by answering a completely different point first, though:
> BTW, regarding your specific demarcation line, there are valid use cases
> for I/O, multi-threading, sys-calls, etc at compile time and these can
> already be used in the context of “procedural macros” (although with an
> unstable and not ergonomic enough way).
Of course there are use cases for those; I never said there were not. Any demarcation line is, by nature, a trade-off, and I argue that implementing memory allocation gives a huge return on investment while everything beyond it (I/O, multi-threading, C calls, sys-calls, inline assembly, …) has seriously diminishing returns and is therefore probably not worth the complexity.
I think you fail to appreciate what memory allocation unlocks.
You can have today (and in the future) a plugin which validates your queries against a database schema or an XML schema; however, said plugin can only emit the resulting object as a `const` item if the type supports a `const` constructor:

- today, the number of `const fn` is very limited, because they only support simple structs and integers
- with memory allocation allowed, it’s unlimited¹!

Thus, by simply allowing memory allocation, not only do we cater to maybe 99% or even 99.9% of use cases, but we also lift all limits on the `const` items a plugin might generate for the non-covered cases. That is very empowering.

¹ key point: “no life before `main`”, we’ll come to it later.
> Now, my ideal solution is complete separation of concerns. An ideal
> `regex!` macro for example, should be a quick one-liner - just build the
> regex by calling the regular regex construction function, only inside a
> `const` context.

Note: there is no need for a macro then; it’s just a `const fn`, as we already have.
Unfortunately, building a complete VM for a systems programming language for which you want cross-compilation is madness:
- you need to emulate calls into C; note that you will NOT be able to load the C library you are calling for the target you cross-compile to, because your current computer is incompatible, and there is no reason to think that the equivalent C library on your own CPU/OS will produce the same result (`sizeof` differs, for example)
- you need to emulate the underlying OS calls, for all OS to which you need to cross-compile
- you need to emulate the CPU behavior for all inline assembly, for all CPUs to which you need to cross-compile
Note: I do not even consider NOT supporting cross-compilation. It’s simply necessary for small embedded devices on which rustc cannot run, and locking those devices out of const-evaluation would seriously hamper the adoption of Rust in a field it suits so well.
I can only think of two ways to cover those requirements, at the moment:
- DRY: use a full VM, which emulates both the target CPU and OS. It would be integrated into the compiler so that the compiler can (a) feed it the emitted assembly, (b) set up the memory to represent the arguments, and (c) read the memory after the computation to get the result out (reinterpreting it as a Rust value)
- Not DRY: implement a semi-VM. Rust functions containing inline assembly or C calls cannot be called directly; instead, the semi-VM comes with one “plugin” per such Rust function, which re-implements it in a VM-aware way (that is, it changes the VM internals appropriately to emulate the effect of the call, taking into account the arguments’ values and the current target CPU/OS). A mechanism is provided for developers to implement such plugins for each and every Rust function they write that either calls C or uses inline assembly, and to deliver said plugins with their libraries.
Option (2) here is my best attempt at solving the issue, and it does not seem that great:
- if you force developers to implement the plugins, all those doing FFI will hate you; it’ll be a pain!
- if a developer can leave off the plugin for some of the functions, then you are back to having only a subset of the library usable in const-evaluated contexts, and by transitivity it might very well be a huge missing subset
- for each plugin, there’s a chance its behavior differs from that of the function it emulates; and overall this is a MASSIVE amount of duplication in user land
And yet it’s my best attempt, because honestly, if you (or anyone) think you can pull off (1), then I would really, really like to know what strategy you have in mind.
So, I’ll be fully honest with you:
- A full VM is such a titanic effort, as far as I can see, that I consider it impossible to pull off
- A semi-VM is, much like a partial interpreter, … partial, which needs to be dealt with²
Of course, if you have a much better idea to implement a full or semi VM solution, I am all ears.
² key point: API stability, we’ll come to it soon.
> Partial solution as you suggested (i.e only allocation):
> Still complicates logic as before,

First of all, “complicates the logic” is not all-or-nothing, it’s gradual. “Just allocation” is probably so much simpler than a full VM that it does not make sense to compare their implementation costs. I dread to check the code base size of VirtualBox or VMWare.

> but now also adds complexity on the end-user that needs to be aware of an
> additional computation model.
Let’s put aside for a second the concept of the full VM, since it is so uncertain whether it’s manageable. Any other solution, no matter where its limits lie, will only be able to interpret/execute a subset of the code at compile time.
For API stability reasons, it would be unreasonable to let the compiler automatically decide whether a function should be callable at compile time: any change in implementation might then be a breaking change in the API! This requires that the developer be able to annotate the subset of functions that she is willing to support for compile-time computation. The good news is that we already have the solution: it’s called `const fn`.

Thus, any provably doable solution today requires the already existing `const fn`; the solutions only differ in how many functions can be `const fn`. In other words, any provably doable solution today imposes the very same burden on the end-user.
And I would argue that the rules are simple enough:
- you cannot initialize a `const` item from a non-`const` function
- you cannot call a non-`const` function within a `const` context
- you cannot refer to a non-`const` global within a `const` context
Those rules are shared with C++’s `constexpr`. The only difference is the subset of functions that can be called, but then the API is already different anyway.
Therefore, I simply reject the argument that it pushes more complexity on the user.
Let us now go back to my claim that with ONLY memory allocation/deallocation allowed (which actually involves full Rust syntax evaluation), the current procedural macros become unlimited.
The key point, as hinted, is “no life before `main`”.

In light of the Static Initialization Order Fiasco and the Static Destruction Order Fiasco known from C++, it was decided that Rust would have no life before `main`: no user code is executed before `main` starts or after it ends. Having suffered from both issues, I fully support this position and decision.
It is very interesting, however, to note the consequences of this decision for `const` items (which are created before `main` starts): their initialization must require no code to run before `main`; equivalently, `const` items can be stored in ROM or a similar read-only section. As a consequence, `const` items cannot contain, transitively, as far as I know:
- a Process ID
- a Thread ID
- a File Handle
Note: while the type could contain an `Option<File>`, it would have to be `None` in a `const` item.

Anything that can legally be stored in a `const` item is fully expressible in a compiler that only supports memory allocation in compile-time evaluation.
And as a result, only supporting memory allocation is sufficient to express the initialization of 100% of the `const` items; it’s unlimited!

- anything less than memory allocation restricts which items a user can store in `const` items
- anything more than memory allocation does not lift any restriction on which items a user can store in `const` items
Coupled with the fact that, from my personal experience, memory allocation covers 99% of the use cases, I consider supporting only memory allocation to be the ultimate trade-off in terms of implementation cost vs. user empowerment.
Thanks to those who read all the way through; please point out any issue/inconsistency/flaw in my arguments.