Running in rust-analyzer

Right now, rust-analyzer doesn't run build scripts by default, for mostly historical reasons. Running build scripts (and proc macros) is sadly required to get decent IDE support, as they can generate Rust code and affect the semantic model of the code.

I am wondering if folks here have ideas about what would be the best user experience here. I'll refrain from publishing my own thoughts for now to avoid spoiling creativity :slight_smile:


I guess because build.rs is essentially a script file, it is hard to tell which files it touches. Though it might be possible to provide it with a mock of the file system crate and then track which files it reads from, and only run it again if it or any of those files are changed.

cargo check already runs them, so I don't think it would be too terrible if rust-analyzer did too. Just don't re-run it on every keystroke :slight_smile:

But you may need to re-run proc macros, even on every keystroke, when the code they operate on is changed (e.g. editing a struct definition that has a derive).

Would it be possible to run these asynchronously? Present a best guess based on source alone, run them in the background, and update results once they finish. That way, slow build scripts/proc macros wouldn't hurt the latency of the IDE experience.
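The asynchronous flow described above can be sketched with a plain thread and a channel. The "model" strings are placeholders for illustration, not rust-analyzer's actual data structures:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Placeholder semantic model built from source only (no build-script output).
fn best_guess_model() -> &'static str {
    "model-without-generated-code"
}

// Simulates a slow build script / proc-macro expansion.
fn run_build_scripts() -> &'static str {
    "model-with-generated-code"
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Slow work happens off the latency-critical path.
        thread::sleep(Duration::from_millis(50));
        tx.send(run_build_scripts()).unwrap();
    });

    // The IDE answers queries immediately with the best guess...
    let mut model = best_guess_model();
    println!("initial: {model}");

    // ...and swaps in the richer model once the background work finishes.
    if let Ok(updated) = rx.recv_timeout(Duration::from_secs(5)) {
        model = updated;
    }
    println!("final: {model}");
}
```

The key design point is that the placeholder answer is returned synchronously, so typing latency never waits on build-script execution.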


For that reason, I think it would be OK to run them after a check has already happened, like checkOnSave. But I don't think arbitrary code execution should be part of startup or of just having opened a file.

Yeah, running arbitrary code from just looking at a Cargo project is risky. But because of cargo check the cat is out of the bag — I can't rely on it not happening. In cargo-crev I hide Cargo.toml to intentionally break IDE integrations to mitigate this risk.


I believe an initial implementation could simply re-run the build.rs whenever it is changed/saved and then re-index everything it writes to (by monitoring which files it opens etc.). Re-running the build.rs when something it reads changes may, in the initial implementation, be too much work. If so, having to re-save the build.rs to have changes from its inputs take effect would not be the worst user experience.
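A minimal sketch of the "monitor which files it opens" idea, assuming reads could be funneled through an instrumented helper. This shim is hypothetical — a real implementation would need OS-level interception rather than a cooperative library wrapper:

```rust
use std::collections::BTreeSet;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};
use std::sync::Mutex;

// Hypothetical shim: every read the build script performs goes through here,
// so the tool learns the script's input set and can watch those paths.
static INPUTS: Mutex<BTreeSet<PathBuf>> = Mutex::new(BTreeSet::new());

fn tracked_read(path: &Path) -> io::Result<String> {
    INPUTS.lock().unwrap().insert(path.to_path_buf());
    fs::read_to_string(path)
}

fn main() -> io::Result<()> {
    // Stand-in for a file the build script reads.
    let input = std::env::temp_dir().join("build_input.txt");
    fs::write(&input, "config = 1")?;

    let contents = tracked_read(&input)?;
    assert_eq!(contents, "config = 1");

    // After one run, the recorded set tells us which paths to watch;
    // only a change to one of them triggers a re-run.
    println!("watched inputs: {:?}", *INPUTS.lock().unwrap());
    Ok(())
}
```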

If instead of attempting to monitor what build.rs does to figure out its inputs and outputs, would it be possible to add a new file that declares them? Something like this:

Filename: build.decl (contains declarations for build.rs; if not present, build.rs must be re-run manually by the user re-saving it). Contents:

    [input.directories]
    path1 = ...
    path2 = ...
    path3 = ...   # potentially other kinds of paths, maybe with regex

    [input.files]
    # ...same but files rather than directories...

    [other.inputs]
    name1 = "description of other kind of input 1"
    name2 = "..."
    namen = "..."

    [other.outputs]
    name1 = "description of other kind of output 1"
    name2 = "..."
    namen = "..."

Maintainers could/should be encouraged to add this file. Perhaps cargo could require it if there is a build.rs. Rust analyzer (and similar tools, including potentially cargo itself) could leverage this file to know when to re-run build.rs. In addition, if this file were incorrect about what the build.rs does, it would get noticed by a lot of people using the crate, as analyzer would constantly be getting things wrong.

The "other.*" options are things that are outside the scope of files/directories and that analyzer (at least initially) cannot monitor, and so it cannot 100% reliably re-run the build.rs. Analyzer could warn the user that they may need to manually invoke the build.rs if any of those resources change.
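As a sketch of how a tool might consume such a declarations file — the file format and the section names below are this thread's hypothetical proposal, not anything Cargo actually supports:

```rust
use std::collections::BTreeMap;

// Parse a hypothetical build.decl into section -> (key, value) pairs.
// "[input.*]" entries would become file-system watch paths; "[other.*]"
// entries can only trigger a warning that a manual re-run may be needed.
fn parse_decl(text: &str) -> BTreeMap<String, Vec<(String, String)>> {
    let mut sections: BTreeMap<String, Vec<(String, String)>> = BTreeMap::new();
    let mut current = String::new();
    for line in text.lines().map(str::trim) {
        if line.is_empty() || line.starts_with('#') {
            continue;
        } else if line.starts_with('[') && line.ends_with(']') {
            current = line[1..line.len() - 1].to_string();
            sections.entry(current.clone()).or_insert_with(Vec::new);
        } else if let Some((k, v)) = line.split_once('=') {
            sections
                .entry(current.clone())
                .or_insert_with(Vec::new)
                .push((k.trim().to_string(), v.trim().to_string()));
        }
    }
    sections
}

fn main() {
    let decl = "[input.directories]\npath1 = vendor/\n[other.inputs]\nname1 = \"system clang\"";
    let parsed = parse_decl(decl);
    // Only the file-system section yields concrete paths to watch.
    assert_eq!(parsed["input.directories"][0].1, "vendor/");
    assert!(parsed.contains_key("other.inputs"));
    println!("{parsed:?}");
}
```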


I think the best solution would be to compile the build script to WASM. Then it could be executed with read access to the current workspace and write access only to the target dir. More permissions could be given if necessary. The WASM runtime could also record which files/directories are read when the build script is executed for the first time, and inform rust-analyzer to watch these locations.

But I guess this would require some changes to Cargo and/or rustc, and we don't want to wait until these changes are implemented and land in stable Rust :neutral_face:


Compiling build scripts to wasm would make it impossible to, for example, use bindgen, as it depends on libclang, which is compiled for the host and not for wasm.

Cargo has rerun-if-changed, but it has a flaw that makes it unusable for build dependencies, so I would be very happy to see some mechanism that replaces it.
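For reference, this is what the existing mechanism looks like from a build script's side. The directives are real Cargo ones; `templates/` is a made-up input directory for this example:

```rust
// Sketch of a build.rs using Cargo's existing change-tracking directives.
// Cargo re-runs the script only when a listed path (or env var) changes.
fn directives() -> Vec<String> {
    vec![
        // Re-run when the script itself changes.
        "cargo:rerun-if-changed=build.rs".to_string(),
        // Re-run when anything under this (hypothetical) input dir changes;
        // Cargo accepts directories as well as files here.
        "cargo:rerun-if-changed=templates/".to_string(),
        // Re-run when an environment variable the script reads changes.
        "cargo:rerun-if-env-changed=CC".to_string(),
    ]
}

fn main() {
    // Cargo parses these directives from the build script's stdout.
    for d in directives() {
        println!("{d}");
    }
}
```

Note these directives can only describe inputs the script already knows about and re-emits on every run, which is exactly the weakness for dynamically discovered dependencies discussed above.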

But it needs to be able to handle dynamically discovered build-time dependencies, because the build script may discover these files and their relationships at run time (e.g. searching for a package, scanning for HTML templates to compile, etc.).


Let me dump my thoughts on the issue right now.

The fundamental problem here is security. build.rs and proc macros can do arbitrary things. It is not good that, just by opening some random Cargo project in an IDE for reading, you might get pwned.

On the other hand, not running these things by default would result in a poor user experience (or rather, it already results in a poor user experience today, and we want to fix it). Tools really should be zero config and as helpful as possible out of the box. Naturally, the first thought here is to ask the user "do you want to run build scripts?" on startup. I, however, strongly believe that first-use dialogs are an anti-pattern. They are an annoyance for experienced users, and a source of confusion and distraction for novices ("what's a build script?").

Performance and reliability are big problems, but are not as fundamental: asynchrony, proper error reporting, and fallbacks are a good enough solution here.

It's also true that we already run cargo check on save by default, and many editors have auto-save, so, in a sense, arbitrary code already runs as you type.

For these reasons, the current plan is to just enable running cargo check at startup by default (with an option to disable it).

I don't think rust-analyzer can do much here in terms of security. It's an underlying problem, bigger than rust-analyzer itself.

There was a prototype of WASM-based proc-macros. It'd be cool if Rust made it official:

but for build scripts I don't even have a solution. If you sandbox them, then lots of them won't be able to do their job (like searching the OS for packages or running arbitrary C build systems). And even if you sandbox them for Rust analyzer, a user working on a project is likely to run cargo test or run the binary anyway, so all that effort would only delay pwnage for 5 minutes.


I wonder if we should build out a more declarative system that targets specific use cases. Presumably this can be done in a backwards-compatible way. build.rs is mainly used for getting C dependencies, and I do not think it is possible to have a declarative system for C dependencies that isn't a pain to use. The C ecosystem has tried many times, and hasn't found a satisfactory solution that works across different Linux distros, on Windows, macOS, iOS, Android and WASM.

It's a problem that is deceptively almost doable, but breaks at edge cases, and having C dependencies work well is all about handling the edge cases. Because build.rs scripts are able to have complex fallback logic and try many workarounds, specific to each dependency, they are paradoxically more reliable for getting C dependencies to work than native C tools! E.g. LLVM has its own llvm-config replacement for pkg-config. OpenSSL's build script needs a lot of custom logic. OpenMP needs compiler-specific hacks. libjpeg needs different assemblers and has 3 ABIs.
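A toy illustration of the fallback pattern such scripts rely on. It shells out to the real pkg-config CLI, with a deliberately bogus library name ("libdefinitelymissing") so the fallback path runs whether or not pkg-config is installed:

```rust
use std::process::Command;

// Sketch of the fallback logic real build scripts embed: ask pkg-config for
// the library, and if that fails (no pkg-config installed, or no .pc file),
// fall back to e.g. a vendored copy or a hard-coded search path.
fn locate(name: &str) -> String {
    let probed = Command::new("pkg-config").args(["--libs", name]).output();
    match probed {
        Ok(out) if out.status.success() => {
            format!("system: {}", String::from_utf8_lossy(&out.stdout).trim())
        }
        // pkg-config missing entirely, or the library not found: fall back.
        _ => format!("vendored: {name}"),
    }
}

fn main() {
    let choice = locate("libdefinitelymissing");
    assert_eq!(choice, "vendored: libdefinitelymissing");
    println!("{choice}");
}
```

Real build scripts stack several such attempts (env-var overrides, pkg-config, vendored sources), which is precisely the per-dependency knowledge a declarative system would have to reproduce.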

You'd have to invent a new build system that is a superset of all these custom scripts full of weird one-off hacks. These hacks are there for a reason — if you try to simplify and don't replicate all of the weird hacks, you'll get build errors. It would be an amount of work comparable to making a Linux distro that unifies all Linux distros, and that also works for all platforms from Windows to WASM at the same time.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.