I have found a number of times that there are crates on crates.io whose tests do not pass, which can cause example code in the documentation to fail for users of those libraries. I think it would be useful if `cargo test` were run automatically before a crate can be published; that would encourage crate owners to ensure the crate is in a good state.
Do people have any views on why this would be a bad thing?
I’d like that.
Perhaps it could be generalized to a pre-publish hook, like npm has? This way I could auto-update readme, pre-compile some data files, run tests, etc.
I think those fall into a different category and are more like features for cargo-release, whereas tests relate to the quality of crates and should therefore apply to everyone (hence including the check in the cargo publish command itself).
Note that, by design, Cargo does not guarantee that a crate as it is uploaded to crates.io passes its tests.
For example, it is common to remove tests from the .crate file via the exclude key.
During publishing, Cargo verifies that the crate as uploaded to crates.io builds (because that is more or less required), but it does not run the tests.
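For illustration, this is roughly what such an exclusion looks like in Cargo.toml (the crate name and paths here are hypothetical):

```toml
[package]
name = "my-crate"   # hypothetical crate name
version = "0.1.0"
# Files matching these patterns are left out of the packaged .crate,
# so the published package may not even contain the tests to run.
exclude = ["tests/", "benches/"]
```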
I was thinking of it more as a gentle reminder to ensure that the crate's tests pass prior to publishing (maybe with a flag to override it, like --allow-dirty), rather than a server-side rejection of the crate.
Also, even if the tests are not distributed, surely it would at least make sense to verify that the doc tests pass, so that people using the library are not being shown incorrect usage in the documentation.
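To make the doc-test point concrete, here is a minimal sketch (the function and crate name are made up for illustration) of the kind of example `cargo test` compiles and runs from documentation:

```rust
/// Adds two numbers.
///
/// The example below is a doc test: `cargo test` compiles and runs it,
/// so if the API drifts away from the documentation, the test fails.
///
/// ```
/// // `my_crate` is a hypothetical crate name for illustration.
/// assert_eq!(my_crate::add(2, 3), 5);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

If this example in the docs no longer compiles or asserts, that is exactly the breakage users would otherwise hit when copying it from the published documentation.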
In general, I like the idea of running tests before publishing.
On the one hand, I see this as a reasonably easy way for new maintainers to realize that they broke a test before publishing a new version. On the other hand, most maintainers already run tests before publishing a new crate version.
I agree that most do run tests (hence this should not cause much disruption to crate owners), but this would ensure that they have, and it would help improve the quality of crates. This is especially true of doc tests: once a crate is published, its doc tests show how to use the interface, so I would argue they should always pass.
The onus is on crate authors to have a well-formulated CI process that makes this step redundant. Testing is a broad term, especially in Rust, where you can have a variety of Rust versions, platforms, and combinations of feature gates enabled or disabled, which would make it difficult for cargo publish to generalise for everyone.
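As a sketch of what such a CI process might look like (assuming GitHub Actions here; any CI service works), a minimal job that runs the test suite on every push would already catch broken tests well before publishing:

```yaml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # A single configuration; real projects often add a matrix of
      # Rust versions, platforms, and feature-flag combinations.
      - run: cargo test --all-features
```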
Surely running doc tests would be a sensible minimum; it would at least ensure that the auto-published docs are valid.
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.