Build script capabilities

(apologies if this has been discussed before, I couldn’t find anything)

I touched on this problem in [pre-pre-rfc?] Solving Crate Trust, but it’s something worth splitting out.

Currently, build scripts and compiler plugins (to a lesser degree) have unfettered power. They may end up doing whatever they want, and it’s hard to reason about them in the context of other build systems. There are a couple of things this impacts:

  • Caching/artifact sharing: It’s hard to cache the results of crates compiled with build scripts, because we don’t know whether a different project’s compilation would produce different output
  • Integration in larger build systems: Most of the larger-build-system plans revolve around using cargo to generate the build plan and then executing the build from the larger build system. build.rs gets in the way here and complicates things, especially if the build system would prefer to drive compilation of native libraries itself (see https://twitter.com/jsgf/status/1053732046948859904)
  • Security: It’s not great if the process of compilation itself has unfettered local access. This could be solved by sandboxing, but that’s hard without knowing what each build script needs

I feel like a significant step in this direction would be some kind of optional Cargo.toml syntax for declaring what a build script can do (a rough sketch follows the list below):

  • Some build scripts are “pure”: they basically read files from the package source tree and output files to the outdir (which the Rust code presumably pulls in somehow). The outputs of these scripts are easy to cache, and they can be integrated into a larger build system easily.
  • Some build scripts may depend on a couple of keys or environment variables but essentially do the same thing. If those inputs can be declared beforehand, these can also be cached well.
  • Some build scripts look for or build a native library. This can sometimes be cached, and sometimes not. It’s less of a pure function, and larger build systems may have very different opinions on what to do here. It’s worth declaring that they look for and/or build library X, and also how shareable the compilation is (will different environment vars change things?).
  • anything (this is the default, maximum power)
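
For illustration, here’s a very rough sketch of what such a declaration could look like. None of this syntax exists today; the table and key names (capability, reads-env, links-native, shareable) are made up:

```toml
# Hypothetical syntax -- nothing here is implemented in Cargo today.
[build-script]
capability = "env"                    # one of "pure", "env", "native", "anything"
reads-env = ["MY_CRATE_CODEGEN_MODE"] # the only inputs besides the source tree

# A "native" script might instead declare something like:
#   capability = "native"
#   links-native = "z"      # looks for and/or builds libz
#   shareable = true        # output doesn't depend on anything project-specific
```

Cargo and external build systems could then treat “pure” and “env” scripts as cacheable functions of their declared inputs, and only fall back to the fully general path for “anything”.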

Potentially you could ask cargo to disallow certain kinds of transitive dependencies based on build script capabilities (either in your Cargo.toml or while building), but the core of this feature is just having this information in Cargo.toml in some form so that other tools can use it. Cargo itself can also use it to share artifacts whenever possible.
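
On the consuming side, that opt-in restriction could be expressed with something equally hypothetical, e.g.:

```toml
# Hypothetical: refuse to build if any transitive dependency needs a build
# script more powerful than the listed capabilities.
[build-script-policy]
allow = ["pure", "env"]
```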

I haven’t thought about this enough to have a pre-RFC, and I’m also probably not the right person to do this (my main stake in this topic is that I want to eventually have cross-project artifact sharing), but I feel this is a good place to start brainstorming.


Just as a curiosity:

Currently, having multiple “-sys” crates that link the same native library is a compilation error. Would declaring build.rs intent help in any way with making that not true anymore?

(This would probably be nicer to have by convention rather than a hard requirement.)

In general, I think moving away from the model of "run build script once and then use its side-effects" is a good idea.

For example, as an intermediate step from where we are now, I'd like to see a build script have a bit more of an API, something like the test harness (see the sketch after this list):

  • build-script -l - list the things this script can do
  • build-script <thing> - perform one of those things, which could emit all the inputs to that thing
  • build-script -q <thing> - cheaply check whether <thing> is out of date or not
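
A build script speaking that kind of protocol could just be an ordinary Rust binary dispatching on its arguments. Purely as a sketch: the flags and the thing names below are invented, and today Cargo runs build scripts with no arguments at all:

```rust
// build.rs sketch of a hypothetical "things" protocol. The -l / -q flags and
// the thing names are invented for illustration; Cargo currently runs build
// scripts with no arguments and only reads their stdout.
use std::env;
use std::process::exit;

const THINGS: &[&str] = &["bindgen-ffi", "compile-vendored-zlib"];

fn is_up_to_date(thing: &str) -> bool {
    // A real implementation would compare mtimes/hashes of the declared
    // inputs and outputs for `thing`; this sketch always reports "stale".
    let _ = thing;
    false
}

fn perform(thing: &str) {
    match thing {
        "bindgen-ffi" => { /* generate bindings into OUT_DIR */ }
        "compile-vendored-zlib" => { /* build the vendored C library */ }
        _ => {
            eprintln!("unknown thing: {}", thing);
            exit(1);
        }
    }
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    match args.as_slice() {
        // `build-script -l`: list the things this script can do.
        [flag] if flag == "-l" => {
            for t in THINGS {
                println!("{}", t);
            }
        }
        // `build-script -q <thing>`: cheap freshness check, reported via exit code.
        [flag, thing] if flag == "-q" => exit(if is_up_to_date(thing) { 0 } else { 1 }),
        // `build-script <thing>`: actually perform one thing.
        [thing] => perform(thing),
        _ => {
            eprintln!("usage: build-script [-l | -q <thing> | <thing>]");
            exit(2);
        }
    }
}
```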

Also, I'm thinking a lib would need to explicitly declare a dependency on a build script, with something like:

[lib]
buildscript = ["thing1", "thing2"] # requires these things from a build script

This would also allow you to have multiple build scripts, so long as they have distinct things to disambiguate them.
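
Spelled out, a crate with two build scripts might then look something like this — again the syntax is hypothetical (today Cargo accepts only a single build entry in [package]):

```toml
# Hypothetical: several build scripts, each advertising the things it provides.
[[build-scripts]]
path = "build/bindings.rs"
provides = ["thing1"]

[[build-scripts]]
path = "build/native.rs"
provides = ["thing2"]

[lib]
buildscript = ["thing1", "thing2"]  # resolved by thing name, not by script path
```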

One of the problems right now is that Cargo.toml doesn't explicitly list the input sources to a crate - instead it works out the top-level source then asks rustc to generate the full list of sources. But this will fail if the sources don't exist yet because they haven't been generated. I assume that currently cargo always unconditionally runs the build script before trying to compile anything else, but it would be nice to defer build script work until as late as possible. (It also has pretty ad-hoc ways of determining whether it needs to recompile/rerun the build script.)

I'm being deliberately vague about what a thing actually is, but it definitely falls along the lines of what @Manishearth mentions.

To expand on this a bit: if a thing represents, say, a dependency on an external library ("I want to bind with openssl") then it means we can abstract it away. In a pure Cargo build it could be a build script which either builds it from source or uses pkgconfig to find the system openssl; but if we're embedded in, say, a Buck build environment then it maps to "I want to take a dependency on the normal openssl target" and we let Buck sort out the details.

In other words, we end up with a cargo front-end that generates a build plan for consumption by a build-execution-engine backend, which can be implemented in multiple ways. In this model a build script is an implementation detail of the cargo build execution engine.
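
To make the openssl example a bit more concrete at the manifest level (hypothetical keys again), the crate would only state what it needs and leave the "how" to whichever backend executes the plan:

```toml
# Hypothetical: declare the external-library dependency abstractly.
# A plain `cargo build` backend might satisfy it via pkg-config or by
# building a vendored copy; a Buck/Bazel backend would instead map it
# onto its own openssl target and never run a build script at all.
[external-dependencies]
openssl = { min-version = "1.1" }
```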

There's also a secondary annoyance here, which is that code which expects a build script to generate a source file ends up with an explicit dependency on Cargo's directory layout. It would be nice if we could have the build script emit "I wrote this here", and then have the source reference that logically (include!("<build script thing3>")) rather than explicitly (include!(concat!(env!("OUT_DIR"), "/thing3.rs"))).

EDIT: I guess we could hack this in with an env-var convention: include!(env!("BUILD_SCRIPT_thing3")), though we'd definitely want to use better terminology than "build script" here.
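
For what it's worth, that env-var hack can already be built on top of what Cargo supports today: a build script can print a cargo:rustc-env=... instruction, and the library can include the generated file through that variable instead of hard-coding OUT_DIR. The names BUILD_SCRIPT_thing3 and thing3.rs below are just placeholders:

```rust
// build.rs: generate a file, then tell rustc where it lives via an env var.
// `cargo:rustc-env=VAR=VALUE` sets VAR for the rustc invocation, so the
// compile-time `env!` macro in the library can read it.
use std::{env, fs, path::Path};

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let dest = Path::new(&out_dir).join("thing3.rs");
    fs::write(&dest, "pub const GENERATED: &str = \"from build script\";\n").unwrap();
    println!("cargo:rustc-env=BUILD_SCRIPT_thing3={}", dest.display());
}
```

and then in src/lib.rs the include becomes include!(env!("BUILD_SCRIPT_thing3")); with no mention of OUT_DIR or the file's actual path.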

(cc @Xanewok)


Nah, that has to be the case because of how linking works

Thanks for starting this discussion! We’ve run into this issue in a few ways in the past.

One place where it’s currently a problem is when building Firefox with tup: tup needs to know about outputs from build commands that are inputs to other build commands (like generated source files or C sources compiled via cc in build scripts) so that it can build a correct dependency graph. Our current workaround is to have a hardcoded list of build script outputs, which is not very maintainable. Ideally we’d have a way to express that in the crate definition so tup would have enough information to do the right thing. (I suspect that bazel and other sufficiently-opinionated build systems have this exact same problem.)

The other place where I’ve run into this is building with sccache: there’s a lower bound on how fast sccache can make a cargo build, even if it’s able to get 100% cache hits, because we have to build and run every build script. (sccache doesn’t currently cache the compilation of the build script itself because it doesn’t know how to properly cache linker invocations, but we could presumably fix that.) While implementing the original support for caching Rust compilation I did a lot of testing building Servo, and I noticed that we spent a lot of time running build scripts that would do things like compile a bunch of C code into a static library, only for the output to be completely irrelevant because sccache was able to fetch the rlib that depended on that static library from cache! If we had knowledge of the inputs and outputs of the build script we could feasibly avoid compiling it at all and simply produce the outputs from the cache.

I do worry that trying to cram the various things that build scripts currently do into a declarative manifest format will be really hard. I’ve wanted for a while now to do a survey of extant build scripts in published crates to determine what the common patterns are, but I haven’t been able to figure out a methodology that didn’t involve either a lot of manual work or writing a ton of code duplicating something like crater, so I never quite got there. I think that longer-term cargo might need to provide an escape hatch in the form of something like Bazel’s Starlark configuration language to allow crates to express somewhat complex requirements without needing to compile and run build scripts first (which produces a chicken-and-egg problem with external build systems).


That's not actually completely true. For example, I've got a crate which supports linking multiple copies of a C library by adding a custom symbol prefix to each build, generated from the crate version number and thus guaranteed not to conflict. This pairs nicely with vendoring the C library's source, which we do, but which isn't required. See the README here.

Also, commenting on the security aspects here: I’m definitely a fan of annotations that would allow sandboxing if possible. Presumably, these annotations give Cargo a maximum limit on how much it may sandbox, but up to that limit, it can do whatever it wants. E.g., it might create chroots, it might create Linux containers, it might create illumos zones, it might do whatever Windows does (lord knows I have no idea), etc.

