I recently wrote a first draft of an RFC outlining an idea for how to enable the compiler to cache proc-macro expansions via the query system in the presence of impure proc-macros. You can find the draft on hackmd.io. Comments are welcome.
- Use a setting in the proc-macro crate's `Cargo.toml` to determine whether all proc-macros for a certain crate can be cached or not
- This is not what I said, and I further clarified it. The proc-macro would set it in its own `Cargo.toml` and it would only affect itself.
- I will bring this up in every thread where this comes up, because Cargo has an active project goal that also needs to know when a proc-macro is idempotent. Adding the flag to `Cargo.toml` and using it to pass a compiler flag to rustc would mean that we both benefit from this. It does mean that a proc-macro cannot change at runtime whether it's cacheable or not (any such dynamic state would have to be handled in a way that lets us invalidate the cache when it changes). That is a reasonable future expansion, but we should make this the default path so Cargo can also leverage it; otherwise we'll have to do an ecosystem shift in how macros are done, which will be difficult. The Cargo approach can also be done without requiring an MSRV bump, which means proc-macros can take advantage of it much earlier than they would have otherwise.
I cannot see a difference there. The sentence above says "in the proc-macro crate's `Cargo.toml`", which implies, at least for me, that it only affects the proc-macros from this crate. If you still feel that there is a subtle difference in how I expressed your idea, please suggest a specific edit to the document to make it clearer.
You have every right to bring that up. That said: I do not see how it affects my proposal, as this targets a different problem. Yes, sure, cargo needs to know about tracked environment variables and tracked paths, but as those APIs already exist in the compiler (see the existing `proc_macro` functions with these names) and as the proposal is mainly about caching proc-macro expansions for incremental compilation, I would argue that cargo is only tangentially affected. Or is cargo informed about any other caching decisions for specific code blocks in the compiler at all?
Ah, I misread. I saw "all proc-macros" and missed "for a certain crate" (which technically should be "for that crate").
rustc needs to know whether a specific proc-macro invocation is idempotent to allow caching of expansions.
Cargo needs to know whether a proc-macro crate is idempotent to know whether we can cache the resulting rlibs for the proc macro crate's dependents, see User-wide build cache - Rust Project Goals. Sharing the cache between projects requires an extra level of information and confidence.
The underlying need, of knowing idempotence, is the same between the two. By pulling in Cargo, we unblock more caching feature development, avoid the ecosystem churn of adopting one mechanism and then switching to the other, and allow faster adoption by the fundamental proc-macros in the ecosystem, since an MSRV bump is not needed.
I still do not see why that would require a new `Cargo.toml`-level config. After all, the proposed RFC does not introduce new functionality from the cargo point of view. In my view there are three features in my proposed API extension (a small snippet after this list illustrates the first two):
- Tracked environment variables, which is something that rustc already needs to communicate to cargo due to the `std::env!` macro
- Tracked files/paths, which is something that rustc already needs to communicate to cargo due to the `std::include_str!`/`include_bytes!` macros
- Caching of the expanded output: that's something quite different from what you describe above. It does not care that much about caching the rlib; it concerns the output produced by the proc-macro. Given that cargo currently has no insight into how incremental caching is handled for other parts of a crate, and that cargo also does not rerun all proc-macros in the dependency tree when you recompile a top-level crate, I cannot see why cargo would even need that information.
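To illustrate what I mean for the first two points, ordinary Rust code already triggers exactly this kind of tracking today (the identifiers here are just examples):

```rust
// rustc records both of these inputs in the dep-info file it emits,
// which is how cargo learns when a rebuild is needed: the file read by
// `include_str!` and the environment variable read by `env!`.
const DATABASE_URL: &str = env!("DATABASE_URL");
const SCHEMA: &str = include_str!("schema.sql");
```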
So even with a `Cargo.toml`-level setting the complexity of the first two points wouldn't go away, while the third point is something that should happen independently of cargo entirely, as it mostly concerns compiler-internal caching decisions. Given that a `Cargo.toml`-level setting would also need to be transported to the compiler somehow, I cannot see how this would be easier to agree on and easier to implement than the proposed solution.
As for the MSRV bump: I think the proc-macro2 crate should be able to offer a Rust-version-independent shim for these functions. It already implements some sort of feature detection.
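As a rough sketch of what I mean (the cfg name and wrapper are purely illustrative, not an existing proc-macro2 API; the probe would work like the build-script feature detection proc-macro2 already ships):

```rust
/// Read an environment variable and, on compilers that support it, register
/// it as an input of the current expansion.
///
/// `has_tracked_env` would be a cfg set by a build-script probe, similar to
/// the nightly/feature detection proc-macro2 already does.
#[cfg(has_tracked_env)]
pub fn tracked_env_var(key: &str) -> Result<String, std::env::VarError> {
    // Nightly-only `proc_macro::tracked_env::var` (feature `proc_macro_tracked_env`).
    proc_macro::tracked_env::var(key)
}

/// Fallback for older compilers: a plain read without tracking.
#[cfg(not(has_tracked_env))]
pub fn tracked_env_var(key: &str) -> Result<String, std::env::VarError> {
    std::env::var(key)
}
```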
All that said: I can see that this sounds vaguely similar to what you want to achieve with your project goal, but in detail it's something different. You are of course free to disagree with that, but then please write your own proposal instead of trying to capture a vaguely similar proposal from someone else. (Also, I personally would prefer the cargo team to concentrate on fixing well-known bugs instead of trying to add even more features, but again, that's your decision.)
I think something might be missing in the communication. I'm not saying the cargo and rustc caching features are the same. Caching of rlibs is separate from proc-macro expansion. No feature strictly requires `Cargo.toml`. It's software; we can do anything.
What I am saying is that the underlying need of knowing the inputs to the proc-macro is the same and that the initial version of the feature should be designed to leverage both needs.
My counter-proposal is bare bones: instead of getting `build.rs`-like reporting telling rustc "rerun if X changed", you instead declare that a proc-macro is idempotent/pure, i.e. that it does not read from any other source, whether that's environment variables, files, or something else. Both approaches are telling the compiler "yes, trust me". Mine is more restrictive, but I suspect that the loss of some macros not having their expansion cached is made up for by the fact that we can also offer user-wide caching of build artifacts (which can then be expanded to remote artifact caches).
We could design new dep-info or JSON messages for rustc to report this "runtime" information to cargo (including "not cacheable"), much like we do for "compile time". However, this is a larger "API" surface to stabilize (messaging to Cargo vs. a CLI flag), and Cargo performing runtime detection of whether it can share an artifact between workspaces increases the complexity risk dramatically. We are interested in exploring that (originally only considering build scripts), but only after we have an MVP.
I'm sorry, but exactly this "let us experiment" language is what really concerns me here. This is a major pain point for a large number of Rust users, so we should likely choose whatever solution works and can be implemented first to address it. There is already ongoing work to cache the output of proc-macros in the compiler, which would hugely benefit from having this information. If you feel that your proposed solution could be implemented faster, please write an RFC for that. Otherwise I would suggest not starting with cargo, but where the pure/impure information has the largest impact: in the compiler. If cargo later wants to do something with that information as well, it can certainly request it from the compiler, but that's nothing that would need to be done in the first step.

I believe it's better to keep features as minimal as possible instead of trying to put more and more things into the same effort. The latter approach often ends up with nothing in the end, while the former allows making some progress and addresses at least the first 80% of the problem. In the end you are talking about functionality that the cargo team might implement in some distant future (given that these are major features, I don't expect them to happen in the next few versions), while my proposal is targeted at solving a problem that exists now, with a solution that could be implemented in the next months.
Additionally, getting this information from the compiler/proc-macro itself might also make things easier for other build systems that are not cargo and that do not rely on `Cargo.toml` files. After all, this information is specific to a certain proc-macro and not to a whole proc-macro crate. Given that there are already significant issues with coupling proc-macro crates to their parent crates (missing `$crate` resolution, broken feature unification, etc.), I would strongly suggest not introducing more features that would lead to people splitting up their proc-macro crates even more.
Something is still going wrong in communication. I am not trying to add scope creep to this project. In fact, I'm suggesting we reduce the scope of this project but in a way that multiple projects can take advantage of it! I'd recommend taking a step back and re-reading my posts. If you have a question to better understand, please ask!
To boil it down a bit, IIUC your goal is not “cargo and rustc know and utilize that provided proc macros are idempotent” (which is scope creep over just rustc knowing/using that information) but instead “rustc knows and utilizes ~ in a way that cargo can utilize straightforwardly as future work.”
Or IOW it's about future compatibility only.
The counter is that cargo already needs to learn the set of files and environment variables that rustc reads to compile a crate in order to determine when to ask rustc to recompile a crate. The RFC as linked in the OP adds no new requirements, and if cargo gains the ability in the future to make caching decisions based on proc macros not declaring themselves free from undeclared dependencies, then that information can ride the exact same communication channel that file and environment dependencies already travel along.
Putting the setting in the crate manifest doesn't really reduce the scope of what rustc needs to do, only the granularity at which dependencies can be dismissed.
I would like to thank you for this summary. It captures my "problem" with the approach suggested by Ed Page well.
While I agree that the build script API is a little wonky, I believe that we should design the user-facing API for this feature to match the semantics there, and that the `CacheStrategy` is a deviation from that.
To take a page out of the `build-rs` crate's book (which might become official), in particular `rerun_if_changed` and `rerun_if_env_changed`, I'd prefer we have:
```rust
// Crate `proc_macro`

/// Track the given path.
///
/// Relative paths are relative to what? Directory of invocation? Proc macro crate directory?
/// - Posted as an unresolved question on the RFC.
///
/// If directory, the entire directory will be scanned for any modifications.
pub fn rerun_if_changed(path: impl AsRef<Path>);

/// Track the given environment variable.
pub fn rerun_if_env_changed(name: impl AsRef<OsStr>);
```
Such that the user-experience of using these ends up being:
```rust
#[proc_macro]
pub fn my_macro(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
    // Communicate that the proc macro "opts in" to change tracking.
    // Similar to "cargo:rerun-if-changed=build.rs"
    proc_macro::rerun_if_changed("lib.rs");

    // Return immediately if the macro is disabled
    proc_macro::rerun_if_env_changed("DISABLE_MY_MACRO");
    if std::env::var("DISABLE_MY_MACRO").is_ok() {
        return input;
    }

    // Get the file next to the invocation
    // Assumes https://github.com/rust-lang/rust/issues/54725
    let path = proc_macro::Span::call_site().source_file().path();
    let path = path.with_extension("my_ext");

    // And emit that file's contents as the macro output
    proc_macro::rerun_if_changed(&path);
    std::fs::read_to_string(&path).unwrap().parse().unwrap()
}
```
These deliberately do not load the values, as the semantics of doing so for a file/directory aren't really defined, and in some cases it may not even be desired (because the loading is done by something else further down the line).
That said, these are API details; I agree at a high level that this is a desirable thing to do.
These two APIs already exist, but with different names:
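As far as I'm aware, those are the unstable `proc_macro::tracked_env::var` and `proc_macro::tracked_path::path`; roughly:

```rust
// Existing nightly-only API (approximate signatures, gated behind the
// `proc_macro_tracked_env` and `track_path` features):

// proc_macro::tracked_env - read an environment variable and register it
// as an input of the current expansion.
pub fn var<K: AsRef<OsStr> + AsRef<str>>(key: K) -> Result<String, VarError>;

// proc_macro::tracked_path - register a file or directory path as an input
// of the current expansion.
pub fn path<P: AsRef<str>>(path: P);
```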
The problem is that these methods alone are not enough to decide whether or not a proc-macro needs to be rerun. For example, `sqlx::query!` connects to a database from a proc-macro. It expects to be rerun every time, as it performs query validation and the database could change between runs. That's why this Pre-RFC seeks to introduce more variants, including a `DoNotCache` variant, to make it possible to explicitly communicate such behavior to the compiler.
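For reference, the rough shape I have in mind is something like the following; the variant and field names follow the draft's wording, while the registration function name is only illustrative:

```rust
use std::path::PathBuf;

pub enum CacheStrategy {
    /// Never cache this expansion; rerun the macro on every compilation.
    DoNotCache,
    /// Cache the expansion as long as the input token stream, the listed
    /// environment variables and the listed paths are unchanged.
    Cache {
        dependent_env_vars: Vec<String>,
        dependent_path: Vec<PathBuf>,
    },
}

/// Called by a proc-macro during expansion to declare how its output may be cached.
/// (Name illustrative; the draft only specifies that this is an explicit function call.)
pub fn set_cache_strategy(strategy: CacheStrategy);
```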
Hmm, if the macro call shouldn't be cached, then you could just not emit any `tracked_*`/`rerun_if_*_changed` calls - again, just like you would do in a `build.rs` script that doesn't want to be cached?
I mean, I get that it's not great to have that implicit, but the same argument applies to `build.rs`, and I think there is great value wrt. teachability in having proc-macros and `build.rs` mirror each other in this regard.
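For reference, this is the `build.rs` convention I mean: a script that prints no `rerun-if-*` directives is rerun whenever anything in the package changes, while one that does is only rerun when the named inputs change.

```rust
// build.rs
fn main() {
    // Opt in to fine-grained tracking: only rerun this script when
    // build.rs itself or the named environment variable changes.
    println!("cargo:rerun-if-changed=build.rs");
    println!("cargo:rerun-if-env-changed=DISABLE_MY_FEATURE");
    // A script that printed neither directive would instead be rerun
    // whenever any file in the package changes.
}
```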
I don't see how that would work. There are cases like `serde::Serialize` that don't need to track anything and should only be rerun if the input changes; then there are cases like `diesel_migrations::embed_migrations!()` that essentially want to be rerun as soon as some directory changes; and finally there are cases like `sqlx::query!()` that want to be rerun if the database changes (so every time).
With the two existing functions/`build.rs`-like functionality you can only cover two out of these three variants. If you say it shouldn't be cached when none of the `tracked_*` functions is called, how would you model the `serde::Serialize` case? If you say it should be cached with no dependencies other than the input, how would you model the `sqlx::query!` case?
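With an explicit strategy call (using the illustrative names from the sketch above), all three variants can be expressed:

```rust
// Illustrative only; `set_cache_strategy` and `CacheStrategy` are the
// hypothetical names from the sketch above, not a finalized API.

// serde::Serialize-style derive: pure, depends only on the input tokens.
set_cache_strategy(CacheStrategy::Cache {
    dependent_env_vars: vec![],
    dependent_path: vec![],
});

// diesel_migrations::embed_migrations!()-style macro: additionally depends on a directory.
set_cache_strategy(CacheStrategy::Cache {
    dependent_env_vars: vec![],
    dependent_path: vec!["migrations".into()],
});

// sqlx::query!()-style macro that talks to a live database: never cache.
set_cache_strategy(CacheStrategy::DoNotCache);
```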
There are cases [...] that don't need to track anything
Yeah, and for those you'd opt in to that with `rerun_if_changed("lib.rs")`, `rerun_if_changed("src/lib.rs")` or similar (depending on what the path is relative to), just like you'd do in a `build.rs` script with `println!("cargo:rerun-if-changed=build.rs");`.
You don't care about any files in those cases; the input token stream is distinct from the file it came from (currently it's not even possible to know which file it came from).
It's not as easy as in the build script case. If a proc macro uses an import from another module (in another file), we have to recompile it if that file changes as well. We also have to keep this path in sync when we move the proc macro around, which isn't great either.
For build scripts this is simpler, because they are a single file.
From a compiler perspective it would also be nicer if we knew, ahead of running a proc macro, whether it is cacheable (including by tracking files/env vars) or not at all. This would also make it possible to grab this information from the crate metadata, making it easier to integrate with, e.g., cargo, I believe. It can probably be worked around, but might add implementation time and complexity.
For context: I am the author of the PR that adds experimental caching for proc (derive) macros mentioned further up.
Thanks for reaching out.
The main motivation to emit this information only at the runtime of a proc-macro is that it might depend on the input and the environment of the proc macro. The Pre-RFC lists the following example:
This RFC proposes to provide this information via an explicit function call so that a proc-macro can conditionally opt in to different caching strategies. Consider the `sqlx::query!` macro, which can connect to a database to perform checks for your SQL query. There is an alternative mode that only depends on local file information, enabled by the `SQLX_OFFLINE` environment variable. This macro could register itself as cacheable as long as the variable is set (by including `SQLX_OFFLINE` in the `dependent_env_vars` list of the `Cache` variant) and as long as the relevant offline files did not change (by including the relevant directory in the `dependent_path` list of the `Cache` variant). If the environment variable is unset, the caching preconditions are invalidated and the proc-macro is rerun. It then connects to the database and can mark itself as `DoNotCache`, as it relies on uncheckable external state in this case.
This use case wouldn't be possible if you needed to declare everything upfront, although I'm not sure how bad that would be. (It might only affect sqlx.)
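To make that concrete, a conditional opt-in could look roughly like this (again using the illustrative names from the sketch above; the helper functions and the `.sqlx/` path are placeholders):

```rust
use proc_macro::TokenStream;

#[proc_macro]
pub fn query(input: TokenStream) -> TokenStream {
    if std::env::var("SQLX_OFFLINE").is_ok() {
        // Offline mode: the expansion only depends on the env var and the
        // prepared-query files, so it can be cached.
        set_cache_strategy(CacheStrategy::Cache {
            dependent_env_vars: vec!["SQLX_OFFLINE".into()],
            dependent_path: vec![".sqlx/".into()],
        });
        expand_from_offline_data(input)
    } else {
        // Online mode: the database can change between runs, so never cache.
        set_cache_strategy(CacheStrategy::DoNotCache);
        expand_against_database(input)
    }
}
```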
If it makes the implementation much easier I would consider going with explicitly declaring this information as part of the `#[proc_macro_*]` attributes. That should allow the compiler to query the information before executing the proc macro, right?
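Purely as a hypothetical syntax sketch (none of these attribute arguments exist today), that could look like:

```rust
use proc_macro::TokenStream;

// Hypothetical syntax only; no such attribute arguments exist today.
#[proc_macro_derive(Serialize, cache = "pure")]
pub fn derive_serialize(input: TokenStream) -> TokenStream {
    unimplemented!()
}

#[proc_macro(cache = "never")]
pub fn query(input: TokenStream) -> TokenStream {
    unimplemented!()
}
```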
I intended for `rerun_if_changed("lib.rs")` to mean "rerun if the `lib.rs` that this proc-macro is defined in has changed", i.e. the path would be relative to the definition of the proc macro (`rerun_if_changed(file!())` is probably more self-describing) - because this mirrors what you do in `build.rs` with "rerun if the `build.rs` that this build script is in has changed".
Admittedly, this is nonsensical: of course we need to rerun if the definition of the proc-macro/build script changed! But again, we do this to tell the compiler to activate its change tracking, which it otherwise can't, because of backwards compatibility, and because you don't want that in certain cases like `sqlx::query!`.
Note that I agree that this is subtle, and I'm not against a better design here; I just argue that whatever we end up with, it has to be the same for proc macros and build scripts.