Proposal for Custom Test Framework support

I’m an intern on the Rust team, working on custom test framework support in rustc.

I’ve written up my proposal: https://blog.jrenner.net/rust/testing/2018/08/06/custom-test-framework-prop.html

It’s currently implemented in this PR:

Any and all feedback would be much appreciated!


First off, thanks for working on this! I really appreciate anything that pushes testing forward :blush:

I have some feedback: I might just not be familiar enough with how testing frameworks are currently built in Rust, but when I read your post I don’t quite understand how you’d do something like this:

  1. Have an idea for a testing procedure. (e.g.: quickcheck ~> I want to generate cases, run them against the test, and any failing ones get reduced according to some transformation until they become minimized cases).
  2. Implement that idea with this new tooling. (e.g.: quickcheck ~> how do I ensure I can generate cases, how do I get access to the test function, and then how do I take a reducing transformation from the user and make minimized cases?).

Sorry if this is too obvious, but I’m really not at all familiar with the underpinnings here :sweat_smile: but I also really want to understand what this will look like for testing frameworks in the future :slightly_smiling_face:

I don’t want to bikeshed this too hard, but I’m a little worried that the choice of test-runner is a crate-attribute, since that implies different crates can and will use different test-runners. I expect there’ll wind up being half a dozen or so competing test-runners, each with different feature sets, and there’ll be one particular one I like best - perhaps it summarizes results in a way I find easy to comprehend at a glance, maybe it produces machine-readable output in a format I can use with other tools, maybe it uses debug info to provide richer diagnostics than the standard test-runner interface would allow.

At any rate, once I have a favourite test-runner, I’d want to use that with every crate I work on, regardless of what test-runner the crate’s author prefers. If the choice of test-runner is set by a crate attribute, I’d have to tinker with the code before running any tests (running the risk of accidentally breaking the code), and it would be difficult to automate (without a full Rust parser and AST to manipulate).

If possible, I think a better solution would be some kind of Cargo.toml option, since it’s safer and easier to edit. Best of all would be an option in ~/.cargo/config, or an environment variable, so I can set it once and have everything always use the test-runner I chose.
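For example, something like this (purely hypothetical syntax; no such key exists today, I just mean an override at this level):

```toml
# ~/.cargo/config -- hypothetical key, nothing like this exists today
[test]
runner = "my-preferred-test-runner"
```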

I wrote a bit about stdout/stderr capture on the eRFC. How does this proposal handle stdout/stderr capture?

Is it just via saying “the test runner can do something” such that the status quo remains, or is the intent to bring in a more robust system?

(Having thought about it some more, I think the “ideal” way to capture all stdout/stderr is to spawn a process for each unit of parallelism and communicate with them via inter-process channels to get them to run tests, and then capture their stdout/stderr that way.)

@Screwtapello

Allowing different crates to pick different test runners was a core design goal. The idea is that some crates will want a richer definition of a test than others, so we need the ability to enhance that interface. I hope to standardize a Testable trait in libtest which would constitute the common ground for tests. All test formats should probably implement that interface, and other Testable types should get a blanket impl so that the libtest Testable can still be used.
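Roughly, I’m imagining something like this minimal sketch (all names and signatures here are illustrative, not a settled libtest API):

```rust
// Purely illustrative sketch -- none of these names are a settled
// libtest API.
pub struct Outcome {
    pub passed: bool,
    pub message: Option<String>,
}

pub trait Testable {
    /// Name used by runners for reporting.
    fn name(&self) -> String;
    /// Execute the test and report what happened.
    fn run(&self) -> Outcome;
}

// A runner only needs to speak the common interface:
pub fn run_tests(tests: &[&dyn Testable]) {
    for t in tests {
        let outcome = t.run();
        let status = if outcome.passed { "ok" } else { "FAILED" };
        println!("test {} ... {}", t.name(), status);
        if let Some(msg) = outcome.message {
            println!("  {}", msg);
        }
    }
}
```

A richer test type would carry whatever extra data its framework needs, but a blanket impl of this trait would let any runner that understands the common interface still run it.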

The goal is to enable projects to get the testing data they need, not to increase the flexibility of someone testing someone else’s code. Thankfully formatters are pretty easy to implement and get added to your favorite test runner.

re: Cargo, crate attributes have a well-defined command-line interface, so it’s possible to build whatever cargo wrapper we want (and I agree we should probably have one).

@felix91gr

No worries, happy to shed some light on things.

So quickcheck’s logic fits nicely into the test case itself; that is, it doesn’t require a richer interface than a normal test. The easiest solution is just to make a procedural macro that works in place of the compiler plugin quickcheck uses today.

That macro would output a regular old #[test]-annotated function, and we could use the built-in test runner or any compatible runner.
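Concretely, something like this (the #[quickcheck] attribute name is illustrative; quickcheck::quickcheck is the crate’s existing runtime entry point):

```rust
// User-facing form (hypothetical attribute name):
//
//     #[quickcheck]
//     fn reverse_twice_is_identity(xs: Vec<u32>) -> bool { ... }
//
// Roughly what the macro would expand to: a plain #[test] function
// that hands the property to quickcheck's existing runtime.
#[test]
fn reverse_twice_is_identity() {
    fn prop(xs: Vec<u32>) -> bool {
        let mut ys = xs.clone();
        ys.reverse();
        ys.reverse();
        ys == xs
    }
    // quickcheck generates random Vec<u32> inputs, runs `prop`, and
    // shrinks any failing case down to a minimal counterexample.
    quickcheck::quickcheck(prop as fn(Vec<u32>) -> bool);
}
```

That also answers the case-generation and minimization part of your question: it all lives inside the test body, behind the built-in test interface.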


@CAD97

This proposal maintains the status quo on stdout capture which (I believe) means it implicitly relies on a future library-level solution to capture.

I believe the multiple processes thing could work if you have the test executable execute itself with different options. This would be implemented in the runner.
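A minimal sketch of that self-exec trick, assuming a hypothetical --run-one flag, and with hand-written helpers standing in for code a real runner would generate:

```rust
use std::env;
use std::process::Command;

// Hypothetical helpers; a real runner would generate these from the
// collected test items.
fn all_test_names() -> Vec<String> {
    vec!["adds".to_string(), "panics".to_string()]
}

fn run_single_test(name: &str) {
    match name {
        "adds" => assert_eq!(1 + 1, 2),
        "panics" => panic!("intentional failure; output captured by the parent"),
        _ => unreachable!("unknown test: {}", name),
    }
}

fn main() {
    let args: Vec<String> = env::args().collect();
    // Child mode: run exactly one test inside this process.
    if let Some(i) = args.iter().position(|a| a == "--run-one") {
        run_single_test(&args[i + 1]);
        return;
    }
    // Parent mode: re-execute this same binary once per test, so each
    // test's stdout/stderr is captured at the OS level.
    let exe = env::current_exe().expect("cannot locate own executable");
    for name in all_test_names() {
        let out = Command::new(&exe)
            .arg("--run-one")
            .arg(&name)
            .output()
            .expect("failed to spawn test child");
        let status = if out.status.success() { "ok" } else { "FAILED" };
        println!("test {} ... {}", name, status);
        if !out.status.success() {
            print!("{}", String::from_utf8_lossy(&out.stderr));
        }
    }
}
```

A real runner would keep a fixed pool of child processes busy rather than spawning one process per test serially.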

@CAD97 I’ve used your notes in the comment to implement a test runner that uses multiple processes for stdout capture!

It compiles and runs using my PR, and you can see the output here:

(I just did the naive process-per-test approach for brevity)


That looks nice, and it’s good to see that the basic idea works. Obviously, the example is just to show process-isolation working, and a full runner would actually use a set number of processes in parallel.

If we end up going with this or a similar setup, I’ll be happy to help implement a version of libtest or another test runner that does process-isolation. (If only servo/ipc-channel supported Windows…)

I am in over my head reading your conversation, but I think proptest uses the https://crates.io/crates/rusty-fork crate for exactly this kind of process isolation.
