This seems like a great plan for tests, but I don’t think it will work for benchmarks. I suspect it won’t be great for fuzz tests either, but I’ll let Manishearth comment on that.
It makes sense to abstract over test runners because all unit tests produce pretty much the same output: a name, pass/fail, and an optional error message. This holds for quickcheck tests, cucumber tests, and JUnit tests alike.
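To make that concrete, the whole common denominator is roughly the sketch below (the names are made up for illustration, not taken from Manishearth's proposal):

```rust
// Hypothetical sketch of the lowest common denominator that any test
// runner (libtest, quickcheck, cucumber, JUnit-style) could report.
pub enum TestOutcome {
    Passed,
    Failed { message: Option<String> },
    Ignored,
}

pub struct TestReport {
    /// Fully qualified test name, e.g. "my_mod::my_test".
    pub name: String,
    pub outcome: TestOutcome,
}
```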
There’s no such common denominator for benchmarks. We can already see that: Criterion produces a number of statistics that libtest benchmarks don’t. Will you add a TestEvent variant that carries distributions for all of those statistics? Sure, you could add one that reports a series of iteration counts and times. But will you then add another for measuring and reporting the amount of memory allocated per iteration (as has already been requested)? And another for reporting data from CPU performance counters (also requested)?
The output formatter for a benchmark harness has to know about the measurements that harness makes. It doesn’t make sense to have interchangeable formatters and runners for benchmarks the way it does for tests.
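To illustrate where that leads, here's a rough sketch of the event type you'd end up with if every harness's measurements had to fit one shared enum - none of these variants exist in the proposal, they're just the requests above written out:

```rust
// Hypothetical sketch only: one variant per kind of measurement a
// benchmark harness might want to report.
pub enum BenchEvent {
    /// libtest-style timing: raw iteration counts and elapsed nanoseconds.
    Timing { iters: Vec<u64>, nanos: Vec<u64> },
    /// Criterion-style summary statistics over a sampled distribution.
    Statistics { mean_ns: f64, median_ns: f64, std_dev_ns: f64, samples: Vec<f64> },
    /// Memory allocated per iteration, as requested elsewhere in the thread.
    Memory { bytes_per_iter: u64, allocs_per_iter: u64 },
    /// CPU performance counter readings, also requested.
    PerfCounters { counters: Vec<(String, u64)> },
    // ...and so on, growing with every new harness.
}
```

Every formatter would then have to understand every variant, which is exactly the coupling I'm describing.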
It seems to me that everything you propose could be done as a crate on top of Manishearth’s proposal, except for register_test_component! - and the goals for that could probably be accomplished in another way. It would be a great crate - by all means, please build it. For benchmarks, though, I’d really rather just generate my own main.
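That’s already possible today: a [[bench]] target with harness = false in Cargo.toml gets no libtest harness at all, and its main is entirely mine to write. A toy sketch of what I mean (the workload and timing loop are made up, not any real harness’s code):

```rust
// benches/my_bench.rs, with `harness = false` set for this [[bench]]
// target in Cargo.toml.
use std::time::Instant;

fn work() -> u64 {
    (0..1_000u64).map(|x| x.wrapping_mul(x)).sum()
}

fn main() {
    let iters: u32 = 10_000;
    let mut checksum = 0u64;
    let start = Instant::now();
    for _ in 0..iters {
        // Fold the result into a checksum so the call isn't optimized away.
        checksum = checksum.wrapping_add(work());
    }
    let elapsed = start.elapsed();
    println!(
        "work: ~{} ns/iter (checksum {})",
        elapsed.as_nanos() / u128::from(iters),
        checksum
    );
}
```

The point is that this main can measure and report whatever it wants, without a shared event type standing in the way.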