In big projects there are a lot of tests, and some may depend on each other. For example, one test checks that deserialization is correct, but that only makes sense if some internal state is okay. So I propose something like #[depends_on]. The problem is that with so many tests it can take some time to see what is actually going on. This attribute could also be applied to modules, not only to individual tests.
Individual unit tests should be stateless. They can run independently of each other, as well as at the same time. If you need shared state, merging the tests into one or using an integration test may make more sense.
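A hypothetical illustration of the "merge into one test" suggestion: instead of one test that prepares internal state and a second that deserializes against it, both steps run in a single stateless test with local state. All names here are invented for illustration.

```rust
// Stand-in for whatever internal state the first test used to set up.
fn prepare_state() -> String {
    "42".to_string()
}

// Stand-in for the deserialization logic the second test checked.
fn deserialize(s: &str) -> Result<u32, std::num::ParseIntError> {
    s.parse()
}

// The two formerly dependent tests, merged: no ordering requirement remains.
fn merged_test() -> bool {
    let state = prepare_state(); // formerly "test 1"
    deserialize(&state) == Ok(42) // formerly "test 2", same local state
}

fn main() {
    // In a real suite merged_test would be a single #[test] function;
    // a plain fn keeps this sketch runnable as a binary.
    assert!(merged_test());
}
```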
I assume from the description that this is about the output only, skipping running later tests that depend on earlier tests completing successfully to reduce the extra failure messages, not that there would be any statefulness between the tests?
That's my understanding as well. Something like specifying a DAG-like structure through test declaration macros so that the test runner can skip tests that will fail if another previous test has already failed. On the one hand, this could reduce test execution time if there are many tests that "depend" on each other and one of the main ones fails. But it could also increase the execution time if no test fails (because they would not be executed in parallel anymore).
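The skip-on-failed-dependency behavior described above can be sketched as a tiny sequential runner. Everything here (the `TestCase` struct, `Outcome` enum, topological-order assumption) is invented for illustration, not a proposed API:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Debug)]
enum Outcome {
    Passed,
    Failed,
    Skipped,
}

struct TestCase {
    name: &'static str,
    deps: &'static [&'static str],
    run: fn() -> bool, // true = pass
}

// Assumes `tests` is already topologically sorted (dependencies come first).
// A test is skipped unless every dependency passed.
fn run_in_order(tests: &[TestCase]) -> HashMap<&'static str, Outcome> {
    let mut results = HashMap::new();
    for t in tests {
        let deps_ok = t
            .deps
            .iter()
            .all(|d| results.get(d) == Some(&Outcome::Passed));
        let outcome = if !deps_ok {
            Outcome::Skipped
        } else if (t.run)() {
            Outcome::Passed
        } else {
            Outcome::Failed
        };
        results.insert(t.name, outcome);
    }
    results
}

fn main() {
    let tests = [
        TestCase { name: "internal_state", deps: &[], run: || false },
        TestCase { name: "deserialization", deps: &["internal_state"], run: || true },
    ];
    let results = run_in_order(&tests);
    assert_eq!(results["internal_state"], Outcome::Failed);
    // The dependent test never ran, so it adds no extra failure message.
    assert_eq!(results["deserialization"], Outcome::Skipped);
}
```

This shows both trade-offs from the post: one root failure suppresses a cascade of messages, but the runner serializes tests along each dependency chain.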
Ok cool! Makes sense. Thanks for replying all!
I don't see why we couldn't "optimistically" run tests in parallel and just ignore the output if it later turns out that a prerequisite failed.
The motivation here doesn't seem particularly strong, but I don't think it needs to have a performance downside – I'm thinking the main downside is just the complication of having an additional feature that any human or computer interacting with the testing system needs to understand.
Perhaps a better name would be #[same_fail(...)], to indicate that it will fail in the same way? Or perhaps
Btw, there's already a crate for serial tests which works well: serial_test, which with some work could do the same thing (with modules, for example).
Tests could be executed as Futures, with dependent tests awaiting their dependencies. This won't solve the concurrency issue perfectly, but it will reduce it a little.
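A std-only analogue of the futures idea (using threads and a channel instead of an async executor, to keep the sketch self-contained): every test spawns immediately, but a dependent test first "awaits" its dependency's result before doing its own work. All names are invented for illustration.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<bool>();

    // "Dependency" test: starts right away and reports whether it passed.
    let dep = thread::spawn(move || {
        let passed = 2 + 2 == 4; // stand-in for real test logic
        tx.send(passed).unwrap();
        passed
    });

    // "Dependent" test: also spawned immediately, but blocks on the channel
    // (the await point) and skips its body if the dependency failed.
    let dependent = thread::spawn(move || {
        match rx.recv() {
            Ok(true) => Some(1 + 1 == 2), // run the actual assertion
            _ => None,                    // dependency failed: report "skipped"
        }
    });

    assert_eq!(dep.join().unwrap(), true);
    assert_eq!(dependent.join().unwrap(), Some(true));
}
```

As the post notes, this only softens the concurrency issue: the dependent thread still occupies a slot while blocked on its dependency.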
I think I did not express myself well. What I meant in that last sentence was that since tests depending on each other would now run sequentially, execution time could increase if at some point there were fewer runnable tests than threads available to run them. In that case, just running all of them on multiple threads (either OS threads or a future executor) could be faster, even though more tests are run in the end.
It is my understanding that such a system would only reduce testing time in cases where there are a few really big tests. Unless we allow some leeway and run tests whose dependencies are still running (and therefore have not failed yet), in order to maximize thread/future pool usage.
I think this would have the disadvantage of having to wait for all the tests to finish before displaying an output, because the first one to run might depend on the last one.
How so? You'd still run the tests in dependency-aware order, you'd just additionally run extra ones that are slightly ahead in the ordering when you happen to have spare cores.
So one would need to use the dependency in the function and keep it in sync manually in the attribute? What kind of diagnostics will be possible to know when a dependency is missing or when one is no longer necessary?
Tests which depend upon shared resources or initialization state can do so via OnceCell, Mutex, Arc... I can imagine some complicated patterns that are difficult to express now (in particular, what if I want some test cases to skip!() if the initialization failed). Can you create a compact motivating example here that isn't amenable to sharing initialization state?
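The sharing pattern mentioned above, sketched with std's OnceLock (stable since Rust 1.70) rather than the once_cell crate: initialization runs once, and every test checks the stored Result and bails out early if setup failed. The skip!() macro from the post is hypothetical, so a plain early return stands in for it.

```rust
use std::sync::OnceLock;

// Shared setup result, computed at most once across all tests.
static SETUP: OnceLock<Result<Vec<u32>, String>> = OnceLock::new();

fn setup() -> &'static Result<Vec<u32>, String> {
    SETUP.get_or_init(|| {
        // Expensive shared initialization would go here.
        Ok(vec![1, 2, 3])
    })
}

fn test_uses_shared_state() -> bool {
    let data = match setup() {
        Ok(d) => d,
        // Stand-in for a hypothetical skip!(): don't fail, just bail out.
        Err(_) => return true,
    };
    data.len() == 3
}

fn main() {
    // In a real suite this would be a #[test] function; a plain fn keeps
    // the sketch runnable as a binary.
    assert!(test_uses_shared_state());
}
```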
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.