For example, automatically generating the intrinsic -> signature table in rustc_platform_intrinsics.
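To make the use case concrete, here is a rough sketch of the kind of table I mean; the intrinsic names and the `Signature` type are purely illustrative, not the actual rustc_platform_intrinsics API:

```rust
// Illustrative shape of an intrinsic -> signature table; the names and the
// `Signature` type are hypothetical, not rustc's API.
struct Signature {
    inputs: &'static [&'static str],
    output: &'static str,
}

// This is the kind of boilerplate one would rather generate from a vendor
// definition file than write and maintain by hand.
fn find(name: &str) -> Option<Signature> {
    Some(match name {
        "x86_mm_add_ps" => Signature { inputs: &["f32x4", "f32x4"], output: "f32x4" },
        "x86_mm_max_ps" => Signature { inputs: &["f32x4", "f32x4"], output: "f32x4" },
        _ => return None,
    })
}

fn main() {
    assert!(find("x86_mm_add_ps").is_some());
}
```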
I think we are coming at this from different points. Compile-time dynamic allocation is equivalent to generating an array whose size is unknown at parse time (which is what you were referring to with [T; n]). const_eval is not a form of code generation - it uses exactly the same types as the outer Rust, so it can't really generate arrays of non-fixed sizes, Rust not being dependently typed and all that.
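As a minimal sketch of that point: a const fn runs at compile time, but it computes a value of an ordinary Rust type, and an array length still has to be a constant the type system can see.

```rust
// const evaluation uses the same types as the surrounding Rust code:
// `table_len` returns an ordinary `usize`, and the array length below is a
// constant as far as the type system is concerned.
const fn table_len() -> usize {
    4 * 8
}

const LEN: usize = table_len();

// Fine: `LEN` is a constant, so `[u32; LEN]` is an ordinary fixed-size array type.
static TABLE: [u32; LEN] = [0; LEN];

fn main() {
    // There is no const-eval trick that yields a `[u32; n]` for an `n` that
    // isn't a type-level constant.
    println!("{}", TABLE.len());
}
```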
Types are allowed to depend on macros in a very flexible way. Allowing macros to depend on (target) types in a similarly flexible way will create a very annoying circular dependency.
You can see that the papers you linked don't allow compile-time code to declare types.
Actually, it might be the case that we don't precisely understand one another.
First, const-eval can run in the middle of type-checking. When we get integer generics, there will be type-checking -> const-eval -> type-checking cycles. Const-eval is not separable from type-checking.
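Roughly how that would look once we have integer generics (syntax here is tentative, and the names are made up):

```rust
// With integer (const) generics, a constant expression sits inside a type,
// so the type checker has to invoke const evaluation mid-check.
struct Buffer<const N: usize> {
    data: [u8; N],
}

const fn words_to_bytes(words: usize) -> usize {
    words * 4
}

// Checking this alias forces const-eval of `words_to_bytes(8)` before the
// field type `[u8; 32]` is even known.
type WordBuffer = Buffer<{ words_to_bytes(8) }>;

fn main() {
    let b = WordBuffer { data: [0; 32] };
    assert_eq!(b.data.len(), 32);
}
```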
However, in most cases what you want is a kind of "monomorphization hack": to run after types are inferred, and execute code that generates code for a function/static according to those types. I don't see much of a theoretical problem with that idea (especially now that we have specialization).
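For the specialization angle, a rough sketch (nightly-only `specialization` feature): the generic code is written once, and a more specific impl is picked once the concrete types are known, which covers a lot of the "code that depends on the inferred types" use case without running arbitrary host code.

```rust
// Sketch of selecting per-type behaviour after types are known, via the
// unstable `specialization` feature.
#![feature(specialization)]

trait Describe {
    fn describe(&self) -> String;
}

// Blanket impl: the generic fallback.
impl<T> Describe for T {
    default fn describe(&self) -> String {
        "some value".to_string()
    }
}

// Specialized impl: chosen whenever the concrete type turns out to be u32.
impl Describe for u32 {
    fn describe(&self) -> String {
        format!("the u32 {}", self)
    }
}

fn main() {
    println!("{}", 5u32.describe()); // specialized impl
    println!("{}", "hi".describe()); // blanket impl
}
```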
Still, if that code runs on the host, it will have to deal with both host and target types. Because of the complexity of integrating a typed AST with Rust, it will probably have to generate an untyped AST, with all the annoyances that implies. There's a good chance that the final design for MIR plugins will allow you to do this.
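That "generate an untyped AST" style is roughly what a procedural macro does; a rough sketch (the macro name is made up, and it has to live in its own proc-macro = true crate, which is exactly the separate-execution-environment issue below):

```rust
// Host-side code that never sees target types; it just emits tokens for the
// compiler to re-parse and type-check. Lives in a separate proc-macro crate.
use proc_macro::TokenStream;

#[proc_macro]
pub fn make_square_table(_input: TokenStream) -> TokenStream {
    // Compute something on the host, then pretty-print Rust source as tokens.
    let entries: Vec<String> = (0..4u32).map(|i| (i * i).to_string()).collect();
    format!("static SQUARES: [u32; 4] = [{}];", entries.join(", "))
        .parse()
        .unwrap()
}
```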
Of course, your other point is that you want to have plugins in the same source-code as the code you compile. I don't like this, because plugins run in a different execution environment from normal code. Basically the entire pipeline of rustc has to run twice (or more) - once for the plugins, once for the target. If you are doing this, why not move the handling to a higher level?