Chainable Tests

Hello! I have a compiler that I want to test with integration tests. Currently there seems to be no way of chaining tests.

So I propose a way for tests to declare dependencies on other tests.

It would look like this:

#[test]
fn a() -> A {
    let a = A::new(); // this takes long
    assert!(a.is_valid());
    a
}

#[test(a)]
fn b(a: A) {
    let b = B::new(a); // this also takes long
    assert!(b.is_valid());
}

#[test(a)]
fn c(a: A) {
    let c = C::new(a); // this also takes long
    assert!(c.is_valid());
}

So here, a test should wait until all dependencies listed in its test attribute have finished successfully; their results are then passed in as the first n arguments of the function (so multiple dependencies are also possible).
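
For example, with the same hypothetical syntax, a test with two dependencies would list both and receive their results in declaration order (this assumes b were changed to return its B, and D is just another made-up placeholder type):

#[test(a, b)]
fn d(a: A, b: B) {
    let d = D::new(a, b); // D is a placeholder, like A and B above
    assert!(d.is_valid());
}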

Nit: since both b and c depend on a, they should receive an &A rather than A.

Also, this reminds me of the fixtures feature that other test frameworks have (e.g. pytest). Tests don't depend on other tests, because the role of a test is to test something, not to prepare for another test. But they can depend on fixtures, whose job is to prepare things for tests.

Of course, while pytest's fixtures are name-based, in Rust they should be type-based:

impl Fixture for A {
    fn fixture() -> Self {
        Self::new()
    }
}

#[test]
fn a(a: &A) {
    assert!(a.is_valid());
}

#[test]
fn b(a: &A) {
    let b = B::new(a); // this also takes long
    assert!(b.is_valid());
}

#[test]
fn c(a: &A) {
    let c = C::new(a); // this also takes long
    assert!(c.is_valid());
}

Fixture dependencies are not as easy to express with a trait as they are with Python's full runtime introspection, but it's still doable:

impl Fixture for A {
    // lifetimes will probably be a pain, so I'll let smarter people figure them out
    type Dependencies = (&D1, &D2);
    
    fn fixture((d1, d2): Self::Dependencies) -> Self {
        Self::new(d1, d2)
    }
}
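
For reference, the Fixture trait these impls assume might look something like this (just one possible sketch, ignoring lifetimes; the zero-dependency case above would then use Dependencies = ()):

trait Fixture: Sized {
    // What this fixture needs before it can be constructed.
    type Dependencies;
    fn fixture(deps: Self::Dependencies) -> Self;
}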

Here's a small demo of how I would go about it:

// Placeholder types standing in for the real, expensive-to-build ones.
struct A {}

impl A {
    fn new() -> Self {
        A {}
    }
    fn is_valid(&self) -> bool {
        true
    }
}

struct B {}

impl B {
    fn new(_a: &A) -> Self {
        B {}
    }
    fn is_valid(&self) -> bool {
        true
    }
}

struct C {}

impl C {
    fn new(_a: &A) -> Self {
        C {}
    }
    fn is_valid(&self) -> bool {
        true
    }
}
fn setup_a() -> A {
    let a = A::new();
    assert!(a.is_valid());
    a
}
#[test]
fn test_b() {
    let a = setup_a();
    let b = B::new(&a);
    assert!(b.is_valid());
}
#[test]
fn test_c() {
    let a = setup_a();
    let c = C::new(&a);
    assert!(c.is_valid());
}

I defined setup functions that create and initialize the required objects for each test. Each test function then calls the corresponding setup function to obtain the necessary objects. If any setup function fails, the corresponding test will also fail.

Good catch, but this would do the heavy computation twice.


Yeah, that's right. I didn't say what you suggested was wrong; I was just sharing my own idea.
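
For what it's worth, on stable Rust today the duplicated setup can be avoided by caching the shared value in a std::sync::OnceLock. Here is a rough sketch building on the placeholder types and setup_a from the demo above (this is hand-rolled caching, not something libtest does for you):

use std::sync::OnceLock;

static SHARED_A: OnceLock<A> = OnceLock::new();

// setup_a runs at most once; every other test reuses the cached A.
fn shared_a() -> &'static A {
    SHARED_A.get_or_init(setup_a)
}

#[test]
fn test_b() {
    let b = B::new(shared_a());
    assert!(b.is_valid());
}

#[test]
fn test_c() {
    let c = C::new(shared_a());
    assert!(c.is_valid());
}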

tl;dr the best way to move this idea forward is to help contribute to libtest-next.

The testing-devex team was formed to coordinate effort on improving the testing experience within Rust.

Currently, our focus is on json output for libtest so we can improve the UI and performance around cargo test. This is a bit slow going because we are trying to wrap up other commitments before we have enough bandwidth to focus on it.

The rest has not been discussed and agreed to within testing-devex.

Last I talked to libs-api, they were concerned about the compatibility guarantees around libtest and instead wanted to see first-class custom test harness support. This would make it more natural to pull in a library to do fancy work like you are suggesting.

I've started libtest-next with the following goals:

  • Vet the json output design with more complex test interactions
  • Vet the custom test harness work
  • Be a focal point for custom test harness work so we can hopefully have one framework that libraries like trybuild, criterion/divan, etc. plug into, rather than everyone reinventing the wheel

Is this ChatGPT? Do we have a policy against posting AI-generated messages?

This seems like a level of complexity better suited to custom test-driving code than to a standard macro.

Also, I don't know what your heavy tests are supposed to do, but you should seriously consider decoupling them. Introduce intermediate serializable representations, which would allow the tests to be totally independent. You'll save yourself a lot of trouble down the line, both because independent tests can be run in parallel, saving you time (are you aware of cargo-nextest?), and because it makes the tests less brittle and easier to work with.
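
To illustrate the serializable-representation idea: assuming the compiler's IR can derive serde's Serialize/Deserialize (which requires the serde and serde_json crates), a later-stage test could load a checked-in snapshot instead of re-running the earlier stages. The Ir type and the snapshot path here are made up for the sketch:

use serde::{Deserialize, Serialize};

// Hypothetical, simplified stand-in for the compiler's real IR.
#[derive(Serialize, Deserialize)]
struct Ir {
    functions: Vec<String>,
}

#[test]
fn backend_accepts_lowered_ir() {
    // Load a snapshot committed to the repository instead of running
    // parsing and lowering inside this test.
    let text = std::fs::read_to_string("tests/snapshots/example_ir.json").unwrap();
    let ir: Ir = serde_json::from_str(&text).unwrap();
    assert!(!ir.functions.is_empty());
}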


Well, my use case is that I want to test my compiler in different scenarios at the AST and IR levels. Until now those have all been under a single test, but I would like to decouple them, so this wouldn't make them any slower than they are right now.
