Provide a way to always build required binaries

We recently split our project into multiple binaries. We now have one main binary that runs and calls out to the other auxiliary binaries.

The problem

cargo build still builds all the binaries, so building works on our project as before. The issue we didn't foresee is that cargo run by itself only builds the main binary. This is true whether the auxiliary binaries are [[bin]] targets or separate workspace crates listed in default-members. This is a change in behavior for end-users, and we were forced to add an error message explaining that cargo run does not work without a prior cargo build.

There is an existing issue for this here. My mental model was always that cargo run is basically cargo build followed by an invocation of the binary.

There is a similar issue with cargo install. It works with extra [[bin]] targets, but not with default-members. This behavior was only introduced recently, and it would be nice to get the old behavior back (always install all binaries).

Proposed solution

There could be an opt-in config to change this behavior, as proposed on the issue. Perhaps something like the following, and/or an equivalent option for [[bin]] targets:

default-members = [
	{ path = "bin/execute-worker", always_build = true },
	{ path = "bin/prepare-worker", always_build = true },
]

If something like this gets approved, we're willing to put in all the work to push it forward.


I presume the eventual intended solution to this is artifact dependencies, but it's an unstable feature and not an option for us.

Another alternative we considered:

We are already discussing ugly workarounds such as having the built main binary extract the other binaries from itself ... I would much rather invest the time in improving cargo.

I wonder whether using a more general build system would be better than adding an extra knob to Cargo. Presumably it would at least be a more immediately actionable solution for you.

If one doesn't want to add another build system on top of Cargo (and doesn't want to write shell scripts that may not run on Windows), it's also possible to use the xtask pattern to create a cargo subcommand that does the same as cargo build followed by cargo run.
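As a sketch of that xtask pattern (the binary name and subcommand shape here are hypothetical), the subcommand can plan the two cargo invocations and then execute them in order:

```rust
use std::env;
use std::process::Command;

/// Argument lists for the two cargo invocations this hypothetical
/// xtask-style subcommand chains together: a full workspace build,
/// then a run of the main binary.
fn plan(main_bin: &str) -> (Vec<String>, Vec<String>) {
    let build = vec!["build".into(), "--workspace".into()];
    let run = vec!["run".into(), "--bin".into(), main_bin.into(), "--".into()];
    (build, run)
}

fn main() {
    let (build, run) = plan("execute-worker");
    // Only invoke cargo when explicitly asked, so the sketch is safe to
    // compile and run anywhere.
    if env::args().any(|a| a == "--exec") {
        for args in [&build, &run] {
            let ok = Command::new("cargo").args(args).status().map(|s| s.success());
            if !matches!(ok, Ok(true)) {
                std::process::exit(1);
            }
        }
    } else {
        println!("would run: cargo {}", build.join(" "));
        println!("would run: cargo {}", run.join(" "));
    }
}
```

Being a real binary rather than a shell script, this also sidesteps the Windows portability problem.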

But I would prefer to have more knobs in Cargo itself (such as a hook that runs after building)


A general build system seems a bit of an overkill for what we need. We don't have any languages other than Rust in our project, and so far Cargo has served us quite well.

If cargo run were to build all binaries, would one expect it to run all of them as well? If not, why would one expect it to build binaries that it will not run afterwards?


Switching build systems is a good idea in the long term. But realistically, in the short term it would be very painful, so I'm not sure it's more immediately actionable. Large, established projects will have many docs, tutorials, end-user scripts, CI jobs, etc. all using the cargo commands. I understand it's cargo's position not to be a general build system, but unfortunately, due to the pain of migrating away, in reality this leads to people piling hacks on hacks to try to do what they need to do. The link @dlight posted about postbuild is a good example. Anyway, I don't mean to derail my own topic; this discussion should probably be its own thread, methinks. :slight_smile:

I understand why that is the default and it makes sense, but in some situations the main binary depends on other binaries and can't really run by itself. I guess that's what artifact dependencies are meant to achieve. Some questions about that:

  1. Does anyone have experience with artifact dependencies, i.e. are they "stable enough" to use in production?
  2. Does cargo install respect artifact dependencies, i.e. by installing all the required binaries?
  3. Are there any estimates of how much work is left to stabilize this feature?
  4. Would adding a new minor feature (our requested knob) be quicker to get into stable cargo?

Can you explain in a bit more detail what your actual use-case is?

We have some auxiliary worker binaries that are dedicated to doing certain kinds of work. They are sandboxed as they deal with potentially adversarial inputs. Having them be separate, smaller binaries allows for a separate address space, cheaper startup, protection against ROP attacks, etc. If the workers can't be found, the main process has no choice but to tell the user to run cargo build and quit (not ideal UX).
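For context, the check in our main binary is roughly this sketch (function and worker names are illustrative); if a sibling worker binary isn't next to the main executable, all we can do is print an error and quit:

```rust
use std::env;
use std::path::PathBuf;

/// Locate an auxiliary worker binary sitting next to the current executable.
/// Returns None when it is missing (e.g. after a bare `cargo run`).
fn find_worker(name: &str) -> Option<PathBuf> {
    let mut path = env::current_exe().ok()?;
    path.set_file_name(name);
    path.is_file().then_some(path)
}

fn main() {
    // "prepare-worker" is the example name from earlier in the thread.
    match find_worker("prepare-worker") {
        Some(path) => println!("found worker at {}", path.display()),
        None => eprintln!("worker binary missing; please run `cargo build` and retry"),
    }
}
```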

Broadly speaking, multiple binaries is also just a useful pattern in general: you have server/client, daemons, auxiliary workers (our case), etc. See e.g. gpg, emacs, ssh etc. Maybe that's not a strong case for patching cargo run, but at a minimum, there should be a way to cargo install all the required default-members of a workspace.

I do believe cargo install installs all binary targets in a given package; I'm not entirely sure how it interacts with workspaces, though.

Yep, it installs binary targets but not workspace default-members:

(Ideally we'd like the binaries as separate workspace crates because it's more flexible. They can have their own build scripts, features, etc.)
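Concretely, the workspace layout we have in mind is roughly this (the crate paths are illustrative; today the default-members entries are plain strings):

```toml
# Workspace root Cargo.toml (layout illustrative). Each bin/* crate has its
# own Cargo.toml, so it can declare its own features and build script.
[workspace]
members = ["bin/main", "bin/execute-worker", "bin/prepare-worker"]
default-members = ["bin/main", "bin/execute-worker", "bin/prepare-worker"]
```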

I would also like to have such a feature. In our project's workspace we have a shared library libfoo and a utility foo which links to it. If I run cargo run --bin foo or execute tests for foo without manually building libfoo first, I get a linking error.

Are you using the dylib crate type, or the cdylib crate type plus a build script? If the former, it should just work. If the latter, maybe the unstable artifact dependency feature of cargo would help: it could make cargo build the cdylib and pass its location to the build script of the executable.
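For the cdylib-plus-build-script route, the executable's build script would emit link directives along these lines (the directory and library name are placeholders, not your actual paths):

```rust
// Sketch of the executable's build.rs: tell rustc where the prebuilt
// library lives and that it should be linked dynamically.
fn link_directives(lib_dir: &str, lib_name: &str) -> Vec<String> {
    vec![
        format!("cargo:rustc-link-search=native={lib_dir}"),
        format!("cargo:rustc-link-lib=dylib={lib_name}"),
    ]
}

fn main() {
    // Cargo reads these directives from the build script's stdout.
    for directive in link_directives("../target/debug", "foo") {
        println!("{directive}");
    }
}
```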


I got the impression that they are using multiple binary executables that exec each other to delegate some tasks.

My question was in reply to @newpavlov who said they are using a dylib.

Yes, an artifact dependency with lib = true works in my case. But I wonder if we will be able to publish foo with an artifact dependency on libfoo without publishing libfoo's source code.

Packages always contain source code. When artifact dependencies are available, they will just be a way to ask for another package to be compiled to a binary that the dependent package can run, not a way to distribute binaries as packages.
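For reference, the manifest entry under discussion looks roughly like this on nightly (syntax from the unstable bindeps feature; the path is illustrative, and the artifact kind is a guess for this case):

```toml
# In foo's Cargo.toml, nightly only; build with `cargo build -Z bindeps`
[dependencies]
libfoo = { path = "../libfoo", artifact = "cdylib", lib = true }
```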

I do not want to distribute the library via crates.io, only a safe wrapper for it and maybe a utility on top of the wrapper. We plan to distribute the library itself using pre-compiled DEB and RPM packages.

In that case you can't use artifact dependencies, but you also don't need them: just do what existing library binding packages already do and declare linkage to the library in your Rust package's build script.

Please read my first message. Both the library and the wrapper around it are part of the same workspace. During development, when the API is in flux, it would be nice to catch discrepancies between the library and the wrapper early (even if via an ugly linker error), but today I cannot simply run tests on the wrapper or on the whole workspace without manually compiling the library first.