Pre-RFC: Cargo build and native dependency integration via crate-based build scripts


Posting this in advance of today’s Cargo meeting.

I’d like to propose a step towards a solution for the problem of integrating a Cargo build process with native dependencies, and with broader build systems or projects, such as massive mono-repo build systems, or Linux distributions.

Right now, the biggest problem facing such systems involves build.rs scripts and the arbitrary things those scripts can do. Such systems typically need more information about the native dependencies a crate embeds or links against, so that they can provide their own versions of those dependencies, or encode appropriate dependencies in another metadata format, such as the dependencies of their packaging system or build system. Today, such systems often have to override the build script themselves and do custom per-crate integration work manually; there’s no way to introspect what build.rs does, or to get a declarative, semantic description of the build script.

However, I don’t believe that we should enshrine a single solution for that problem into Cargo itself; at least, not at this time. We need to explore the problem space further, and I believe we should do so via the crate ecosystem.

Thus, I’d like to propose the following:

  • Introduce a simple way to declare in Cargo.toml that the crate’s build script (build.rs) should come from one of its build dependencies, rather than from the crate itself.
  • Build scripts provided by a build-dependency crate would implement a trivial interface: they get called with no parameters, they get all their information by parsing declarative information from Cargo.toml, they print the standard build script outputs that Cargo expects, they potentially generate code, and they return an error if anything goes wrong.
  • Start experimenting in the crates ecosystem with a family of crates that provide various fully declarative build system steps. These steps are easily aggregated without further help from Cargo: we can have a “metabuild” crate that allows listing multiple such build steps, and its build script would invoke each of the individual build steps in a well-defined way, allowing them to feed into each other if needed.
  • Work with the ecosystem of broader build systems and packaging systems to create well-defined ways to override those individual declarative build steps, such that those systems can know that if they handle each individual step of a build process, they’ve completely subsumed the necessary functionality of the build process.
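To make the interface in the second bullet concrete, here is a minimal sketch of what a build step provided by a build dependency could look like. All names here (`BuildError`, `run`) are invented for illustration; nothing in the proposal fixes them.

```rust
use std::fmt;

// Hypothetical error type for a declarative build step.
#[derive(Debug)]
pub struct BuildError(pub String);

impl fmt::Display for BuildError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "build step failed: {}", self.0)
    }
}

/// A declarative build step: takes no parameters, gets its configuration
/// from Cargo.toml, prints the usual `cargo:` directives on stdout, and
/// reports failure through its return value.
pub fn run() -> Result<(), BuildError> {
    // In a real step these values would be parsed from Cargo.toml metadata.
    println!("cargo:rustc-link-lib=z");
    println!("cargo:rerun-if-changed=Cargo.toml");
    Ok(())
}

fn main() {
    if let Err(e) = run() {
        eprintln!("{}", e);
        std::process::exit(1);
    }
}
```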

Over time, we may also make it possible to pull out and override some of Cargo’s built-in functionality into build scripts, such as dependency location, resolution, and even building of dependencies, enhancing currently hard-coded mechanisms for doing so. In doing so, I’d recommend trying to treat those steps of Cargo as a library, rather than as a layer that has specific points for extensibility; for a parallel, see the discussion of the midlayer mistake in the context of the Linux kernel. However, we need a good declarative hook mechanism first, and that’s what I’m proposing here as a first step.


I’m really excited about this direction, and like the plan of incrementally getting there through experimentation in the ecosystem.

One thing that gave me slight pause, however:

This particular setup isn’t compositional: it implies there is always a single build script for a crate, and a single place it can come from. Might we consider instead a way of aggregating build scripts, essentially by saying for a given dependency that you’d like it to provide a build script for your crate as well?

I do think it’s a good idea for the downstream crate to have to opt in to running an upstream build script, for clarity about what’s happening at build time.


I proposed that approach quite intentionally, and you’re right, it isn’t compositional. My intent is that we don’t know yet precisely how we want to compose such build scripts; in the simplest case we just want to call all of them in some order, but in more complex cases, they might need to make information available to each other, or declare dependencies that constrain the order in a DAG, or run builds of dependencies in parallel, or expose the build steps to an external build system like bazel, or otherwise allow substituting the implementation of the composition method itself…

So, given that, my proposal is that we provide a simple interface to run one build script provided by a dependency, and then in the ecosystem we can have crates like metabuild that define a means of composing multiple such scripts. The first such crate can do the obvious thing: “run the list in the order provided in Cargo.toml, and fail if any of them fail”. If we need to change or enhance the method of composition, we can just upgrade a metabuild-style crate or create a new one, rather than having to change what we’ve baked into Cargo. If instead we started out by baking a quick “good enough” way of aggregating build scripts into Cargo, that would raise the barrier to experimenting in that area.
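A sketch of the simplest metabuild-style aggregator described above: run each declared build step in order and fail fast. The step names and the `BuildStep` signature are hypothetical, chosen only to illustrate the composition.

```rust
// A build step takes no parameters and reports success or an error message.
type BuildStep = fn() -> Result<(), String>;

// Run every step in the order given; stop at the first failure.
fn run_all(steps: &[(&str, BuildStep)]) -> Result<(), String> {
    for (name, step) in steps {
        step().map_err(|e| format!("step `{}` failed: {}", name, e))?;
    }
    Ok(())
}

fn main() {
    // Hypothetical list, as it might be declared in Cargo.toml.
    let steps: [(&str, BuildStep); 2] = [
        ("pkg-config", || Ok(())),
        ("bindgen", || Ok(())),
    ];
    // More elaborate aggregators could instead build a DAG of steps,
    // run independent steps in parallel, or pass data between them.
    run_all(&steps).expect("metabuild failed");
}
```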

If we converge on a single well-established way to aggregate build scripts, it might make sense to move that mechanism into Cargo natively, if doing so gives us some advantage. But taking a cue from the steps already baked into Cargo that we’ve talked about making more configurable and customizable, let’s not bake that in quite yet; instead, let’s minimize the surface area of the Cargo interface, and maximize the amount of experimentation we can do in the ecosystem.


We talked about this in the Cargo meeting today. Some of the points discussed:

  • We discussed the point above, whether Cargo should support running multiple build scripts. On the one hand, adding minimal support for that would simplify life for the common case; on the other hand, it might discourage experimentation in the ecosystem. We came to the conclusion that we should err on the side of making life simpler for users who want to use multiple build scripts, but also carefully provide information in the Cargo documentation and the messaging around this feature to encourage experimentation with metabuild-like crates.
  • We need to nail down the interface for Cargo to invoke build logic from a build-dependency.
  • Related, we should provide guidance and examples for how to parse declarative information from Cargo.toml; we might also want to provide a standard library for doing so.
  • We want to make it clear that this doesn’t completely address use cases like bazel or debcargo; rather, it forms one part of the solution, by helping to address build scripts and make them more substitutable.
  • In the future, we may wish to pull some of Cargo’s own native build steps out and make them pluggable and customizable; this mechanism might allow us to do so.
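On the point about parsing declarative information from Cargo.toml: a real implementation would use a proper TOML parser (for example the `toml` crate), but a dependency-free sketch shows the shape of the problem. The `[package.metadata.native]` table and its keys are invented for illustration.

```rust
// Naive scan for `key = "value"` inside a named table of a manifest string.
// Illustration only; a real build step should use a full TOML parser.
fn metadata_value(manifest: &str, section: &str, key: &str) -> Option<String> {
    let mut in_section = false;
    for line in manifest.lines() {
        let line = line.trim();
        if line.starts_with('[') {
            in_section = line == format!("[{}]", section);
        } else if in_section {
            if let Some(rest) = line.strip_prefix(key) {
                if let Some(value) = rest.trim_start().strip_prefix('=') {
                    return Some(value.trim().trim_matches('"').to_string());
                }
            }
        }
    }
    None
}

fn main() {
    let manifest = r#"
[package]
name = "libfoo-sys"

[package.metadata.native]
library = "foo"
"#;
    let lib = metadata_value(manifest, "package.metadata.native", "library");
    assert_eq!(lib.as_deref(), Some("foo"));
}
```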


It would be awesome if we could just use cmake to build Rust projects, especially (but not necessarily only) Rust projects that also need to build non-Rust code. In particular, it would be great to have cmake guaranteed to be available whenever cargo is available.

It is possible that somebody could create something better (in Rust) than cmake, and some would argue they already have, but I think cmake is good enough, and much better than the current situation.


Is there room in your Weltanschauung for people who think cmake is actually worse than autoconf+gmake, and actively avoid having it on their computers at all for fear that it will somehow become involved in some operation where it wasn’t welcome, and by doing so, fuck everything up?

Kidding. Except not.


There’s also the opposite position. :slight_smile: In my case, cmake is literally the best thing that ever happened to my C++ code. Even all the goodies in C++11 can’t compete with the fact that having a build system which actually works reliably, intuitively, consistently, and many other positive adverbs removed a tremendous source of wasted time and agony and allowed me to finally focus on doing my real job.

We probably shouldn’t tie cargo specifically to cmake though. Any meta-build system ought to be able to invoke cargo equally easily.


@briansmith, I think you may have the wrong impression of this pre-RFC. This isn’t about rewriting Cargo to be a different build system; Cargo will still be Cargo, and it’s unlikely to ever require a third-party build system of any kind. This will, eventually, make it easier to integrate cmake with Cargo and vice versa, along with many other build systems, but that’s not the same thing.

@briansmith, @zackw, @Ixrec: Let’s please keep things civil here. There’s room for many build systems in the world, and this is not the place to argue the merits or drawbacks of any particular build system.


Why limit the design space to such narrow requirements? IMO, just integrating CMake is a great alternative to doing what you’re proposing here, as it’s already a declarative and extensible build system that’s widely used and widely supported, including native support in IDEs like CLion and Visual Studio. It would seem to require very little work to guarantee that CMake is available and integrated into Cargo, whereas what you’re proposing here seems like it would be significantly more work. If it is significantly more work, then it should be significantly better, but I don’t see what would be significantly better about building a new Rust-specific mechanism for building non-Rust code, compared to just delegating it all to CMake.


I don’t know how CMake support works in CLion, but I can say that Cargo is close to an ideal build system from the IDE perspective.


I am not very familiar with native dependencies world, so one thing in the RFC seems a bit confusing to me.

My understanding is that the ultimate goal is to have all information about native dependencies in Cargo.toml, so that build scripts might be trivial: a call to a library function with a single argument describing the information in the toml. This should make it possible to understand what a build script is doing by looking only at the toml file.

If this is correct, I don’t understand where the “should come from one of its build dependencies” part comes from.

Would it be possible (hypothetically) to leave the current interface of Cargo to build scripts exactly the same, but write a native_build crate which is intended to be used in the build.rs of various -sys crates exactly like this:

extern crate native_build;

fn main() {
    native_build::build();
}
The native_build crate would then read metadata from Cargo.toml and use that info (and only that info) to determine what should be built and how.
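To make the hypothetical concrete, such a native_build crate could read a metadata table along these lines. The table name and keys are invented for illustration; none of this is an existing Cargo convention.

```toml
[package.metadata.native_build]
library = "z"          # native library to locate and link
min-version = "1.2.8"  # minimum acceptable system version
link = "dylib"         # or "static"
```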



Note that cmake has a server mode, similar in concept to the RLS: an IDE can request information about the project, its configuration, and so on, over a JSON-based protocol. As I remember, at some early stage of its development, cmake server mode also supported autocomplete and other features for the cmake language; I don’t know the current state of that.


This would not help the myriad people using Rust and Cargo with other build systems.

We already have that mechanism, in the form of build.rs scripts; this proposal enhances that existing mechanism to make it more declarative.


This is effectively what we’re talking about doing, except that we’ll have multiple such crates for various kinds of native builds (see the ecosystem of existing crates like cmake, gcc, pkg-config, and so on, as well as things like bindgen and lalrpop), and Cargo will generate the script itself rather than requiring you to write one.


I’ve commented on a related RFC that I don’t think a fully declarative system is widely applicable. It’s tempting to think that C libraries are just a matter of running configure/make/cmake, but C libraries/systems/package managers create a lot of unique problematic cases.

I grew to appreciate the -sys ecosystem of Rust. Each library has its quirks, and maintainers of -sys crates sort these things out — once for everyone. While there are many similarities across build scripts, no two cases are identical. I’m slightly worried that if there’s a purely declarative method that tries to work for everything, then each user will individually rediscover that it almost-but-not-quite works.


I do like the idea of limiting build scripts to only well-defined inputs. I would like a uniform way (tracked by Cargo) of specifying a preference for static vs. dynamic linking, and of overriding library locations (in a very general sense of location: not just include/lib dirs, since some libraries have their own structure, helper commands, and data directories).
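A purely hypothetical sketch of what such Cargo-tracked overrides might look like; nothing like this exists today, and every key name here is invented:

```toml
# Per-native-library overrides, tracked by Cargo rather than by each build.rs.
[native.openssl]
linkage = "static"                  # preference: "static" or "dylib"
include-dir = "/opt/openssl/include"
lib-dir = "/opt/openssl/lib"
data-dir = "/opt/openssl/share"     # for libraries with their own layout
```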


I came to this thread after a few days of automating a semi-complex build using cargo, involving several native dependencies, and the requirement to have it work on both mac and linux. It has been quite painful.

In the recent past, I’ve worked with conda quite a bit. Any discussion of what cargo ought to be doing that isn’t informed by what conda does is doing itself a huge disservice. All of the problems I have run into in the past days, and which are being discussed here, are things that conda solved.

My main takeaway is that cargo/rust needs to let go of the notion that building complex dependencies from source is something that will actually scale to complex software across multiple platforms. As long as every native package does not take full responsibility for building its own crap and distributing it, you are in a world of pain.


Can you describe specific pain points?

What features from Conda would you recommend adopting?


I just posted an RFC for metabuild:


It would help integrate with cmake :).

I think it is better to look at integrating with another build system (cmake, nix, bazel, etc.) as a way to potentially cheaply implement declarative native dependencies as a replacement for build.rs files, but not as a way of easily integrating into those other build systems.

The downside of most of these build systems is they need some package definition to be written by a human, for everything in the transitive closure. Nix probably has the highest number of native packages available to be depended on, but doesn’t support windows. Other build systems have different trade-offs.

Haskell’s cargo equivalent, stack, delegates to nix for native dependencies, so it has been done before.

As an example, this libz-sys/BUILD is roughly equivalent to its build.rs. With something like this, the problem changes from building native dependencies to integrating the alternative build system into the cargo ecosystem instead. With bazel in particular, there are not that many native dependencies that already have a BUILD file available somewhere.