A way to run tests for all your dependencies, recursively?

Problem

Currently cargo test only runs tests for the current crate or workspace. Even the -p flag only seems to be able to select between packages in the current workspace.

There would be value in being able to run cargo test for dependencies as well. For example, you may be targeting an architecture or operating system that the author of the dependency did not test for. Or the dependency might no longer pass tests on the newest version of Rust (or on the older one you are using; MSRV isn't always declared correctly).

This would also be useful when packaging for distributions. E.g. Arch Linux does not create separate packages for each Rust library like Debian does, and as a result it will only run the tests for the final binary crate. See the Arch Linux Rust packaging guideline for more background info.

This should not be the default (can you imagine the memes about compile times if it were), but perhaps an option such as cargo test --dependencies -p '*' could work (where --dependencies would also enable testing dependencies outside the current workspace).

How would this work?

It would use the test files that are included in the .crate files. Not all packages currently include their tests that way. That is OK, but if this feature existed it would put pressure on maintainers to include those tests in their published crates, which would be a good thing.

The tests could build and run either in your target directory or in a temporary directory; I have no strong opinions on that sort of detail.

Discussion

Is this something that has been discussed before? I wasn't able to find anything. Would there be any major technical problems with implementing this? What would the pros and cons be?

8 Likes

I'd love to see a way to do this, and running tests on the specific target seems like a great motivation.

That said, I do know that some crates don't include tests in the .crate files because the tests and test data are large and they don't want every user to have to download those when most users don't need them.

One potential way to solve that would be to check out the git repository of the crate and run the tests from there.

(Relatedly, we should really have a way to verify that the uploaded .crate file matches the git repository.)

6 Likes

Here's a thread where this came up.

To me there are a few problems with the current state of things:

  • cargo publish doesn't run tests, but tests that depend on other files or on the workspace layout can be broken during packaging. This feels like a workflow that is not supported by Cargo. As a crate author I'm unhappy about having to maintain a packaged test setup that I never test myself. This might be solvable by running the tests during publish, perhaps via a cargo test --packaged flag or an extra cargo hack mode, but that means additional CI time and a slower publishing workflow.

  • I prefer not to publish data files for tests, especially when they can be pretty large. I work with image codecs, so I have files that test the largest possible images, and sometimes submodules with official test suites that can be huge. Large crates are annoying, especially in environments like hosted CI and Docker where it's hard to cache any of Cargo's directories.

  • While it's easy to exclude external files from packages, it's not possible to remove inline #[test] functions from src/ files that may need those files, which leaves the packaged tests broken (see the sketch after this list). Because of private APIs, not all tests can live in the tests/ dir. Commenting out tests for cargo publish is problematic, because a "dirty" repository doesn't get a .cargo_vcs_info.json file, which is useful for associating a release with a repository, and is sometimes needed to process relative links in the README properly.

  • CVE-2024-3094 has shown that test files can be used to hide a malware payload, and such files can easily be obfuscated to the point of being impossible to review. I have to review tarballs of crates that I'm using for my work. Source code review is already difficult and tedious, and I'd rather not have to worry about the additional risk of binary blobs in tarballs.

  • The tests for crater runs and Linux distro packaging need to be downloaded only a handful of times, while downloads for regular cargo builds can happen literally a million times more frequently. crates.io downloads are growing exponentially: in a single day it now serves more crate downloads than it did in the whole two years after Rust 1.0. The extra data is likely to get expensive. It feels like a huge waste to make crate tarballs larger for a once-in-a-million use.
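To make the third point above concrete, here is a minimal sketch (all names and paths are hypothetical) of an inline unit test that depends on a fixture kept out of the published package. The library code ships in the .crate, but the test breaks at runtime once the fixture is excluded:

    // src/lib.rs of a hypothetical crate: the decoder is published, but the
    // fixture under tests/data/ is excluded from the .crate file.
    pub fn decode(bytes: &[u8]) -> Result<Vec<u8>, String> {
        // Real decoding elided; stubbed so the sketch compiles on its own.
        if bytes.is_empty() {
            Err("empty input".to_string())
        } else {
            Ok(bytes.to_vec())
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn decodes_reference_image() {
            // tests/data/reference.png is listed in `exclude` in Cargo.toml,
            // so this unit test still compiles in the packaged crate but
            // panics at runtime when the file is missing.
            let bytes = std::fs::read("tests/data/reference.png").unwrap();
            assert!(decode(&bytes).is_ok());
        }
    }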

2 Likes

Some potential solutions:

  • Maybe crates.io could have two tarballs per crate: one with the minimal files needed to build it as a dependency, and an extras tarball with the remaining files for tests and benchmarks.

  • or support "storing" files in crates.io tarballs in a way similar to git LFS: for files excluded from the regular package, the tarball would store their hashes, along with some mechanism to obtain the actual files from either crates.io or a git repo.

  • or change the publishing workflow so that tarballs are not uploaded by users directly (which makes it possible to manipulate the tarball and put arbitrary files in there), and make crates.io itself fetch a specified git (or Mercurial, etc.) tag. This would give a stronger guarantee that the crates.io package matches a given tag, and the excluded files could be fetched from the repo.

1 Like

Some of my (git-using) crates store test data in the history of the project itself (merged into the history using -s ours) because git bundle files are a pain to juggle and amend, not to mention being binary blobs to commit. These crates just don't test properly at all outside of their git repository. To that end…

How can one detect that it is a .crate build and not a git-based build without build.rs?

Unlikely. I'm not really interested in publish being gated on a single configuration. My project's publish job depends, in CI, on the builds I do care about passing (various feature flag selections, MSRV, clippy, etc.). If I cared more about other platforms for these projects, I'd gate publishing on those passing too, not just on whatever happens to be doing the publishing.

I'm not sure what you mean by this? You don't have to detect those cases; the .crate file just doesn't support running cargo test from it at all, or alternatively doesn't support some of the tests.

1 Like

Oh, you're saying that the test action is just broken there? I thought that those who explicitly exclude the files had also managed to disable the code using a similar mechanism.

Depends on the crate, but sometimes it's just broken. Other times it works and self-disables when it sees that the files it needs are not present.
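For reference, the self-disabling pattern usually amounts to something like this minimal sketch (the paths and test name are hypothetical): the test checks for its data directory and skips itself, rather than failing, when the data was excluded from the package:

    #[cfg(test)]
    mod tests {
        use std::path::Path;

        #[test]
        fn heavy_round_trip() {
            // CARGO_MANIFEST_DIR points at the crate root for both git
            // checkouts and unpacked .crate builds.
            let data_dir = Path::new(env!("CARGO_MANIFEST_DIR")).join("tests/data");
            if !data_dir.exists() {
                eprintln!("tests/data not present (packaged build?); skipping");
                return; // passes vacuously outside the git checkout
            }
            // ... real assertions against the files in `data_dir` ...
        }
    }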

Not quite true. Both Fedora and Arch packages run tests on Rust code built from crates.io downloads.

I certainly wouldn't want to download hundreds of megabytes of test vectors for all my crates either. In fact, I recall posts on Reddit and URLO where people with flaky metered connections complained about basically everything non-essential in a crate, including example code and docs, and they have a certain point.

That said, advanced test frameworks typically offer multiple test configurations. There are the usual tests, which are assumed to be relatively light and fast. But there can also be separate test profiles for, e.g., tests that require heavy external data, need a specific environment, or simply take too long to execute. These heavyweight tests wouldn't run by default, but could be opted into. It would be nice if Cargo supported something like that out of the box. At the moment this kind of test separation has to be hacked in with something like special features or build scripts.
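As a rough illustration of the current workarounds (the feature name is hypothetical), heavyweight tests today are usually either marked #[ignore] and run explicitly, or gated behind a Cargo feature:

    // 1) #[ignore]: skipped by a plain `cargo test`, run explicitly with
    //    `cargo test -- --ignored` (or `-- --include-ignored`).
    #[test]
    #[ignore = "requires the large official test suite to be checked out"]
    fn official_conformance_suite() {
        // ... exercise the big external test vectors ...
    }

    // 2) A feature gate: only compiled when the (hypothetical) `heavy-tests`
    //    feature declared in Cargo.toml is enabled, i.e.
    //    `cargo test --features heavy-tests`.
    #[cfg(feature = "heavy-tests")]
    #[test]
    fn exhaustive_fuzz_corpus() {
        // ... only built and run when the feature is enabled ...
    }

Both work, but neither is discoverable or standardized, which is the gap a built-in notion of heavyweight test profiles would fill.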

If we had a built-in solution for this kind of test separation, we could also have extra metadata attached to heavyweight tests. I'm not sure whether crates.io would be a good place to store test data, but one could certainly download it manually from relevant sources.

I believe that most crates would be fine using only light tests, which don't require any extra support from Cargo, crates.io or the user. Even for crates which need heavyweight tests, it's likely that light tests could provide a nice first approximation to validation even in the absence of test data.

4 Likes

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.