Testing save-analysis against the code base
I was talking to @nrc about RLS reliability and they mentioned that one common problem is that new features added to rustc wind up violating some (perhaps accidental) invariant of the save-analysis data that the RLS relies on (example). This is particularly problematic for new features, since they may use existing data structures in new ways.

I was thinking that we could help catch such problems faster if we leveraged our existing unit tests (much like we do for the HIR pretty printer). We could set up a kind of “multiplicative” harness by doing the following:

  • For each test file we have, compile it with -Zsave-analysis enabled
  • For each resulting save-analysis file, run it through the rls-analysis crate’s “lowering” step
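The two steps above could be sketched roughly like this. This is just a dry-run that builds and prints the commands the harness would run; the real thing would live in compiletest and actually spawn the processes, and `lower-analysis` is a hypothetical wrapper binary around the rls-analysis crate, not an existing tool:

```rust
use std::path::Path;

/// Dry-run sketch of the proposed "multiplicative" harness: for each test
/// file, (1) compile it with -Zsave-analysis, then (2) lower the emitted
/// JSON via the rls-analysis crate. We only construct the command lines
/// here; output paths and the `lower-analysis` helper are illustrative.
fn plan_commands(test: &Path) -> Vec<String> {
    let stem = test.file_stem().unwrap().to_string_lossy();
    vec![
        // Step 1: emit save-analysis JSON alongside the normal build.
        format!("rustc -Zsave-analysis {}", test.display()),
        // Step 2: run rls-analysis lowering over the emitted JSON
        // (hypothetical wrapper; the real harness would call into the
        // crate directly).
        format!("lower-analysis save-analysis/{}.json", stem),
    ]
}

fn main() {
    for test in ["tests/foo.rs", "tests/bar.rs"] {
        for cmd in plan_commands(Path::new(test)) {
            println!("{cmd}");
        }
    }
}
```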

This way, we would catch things that will break the RLS before they actually do break it. For some changes, the fix will need to land in the RLS itself; in that case we would add a // no-rls comment to the test and file an issue over in the RLS repository (once that issue is fixed, we remove the comment). We’ll also need some kind of ‘master switch’ to disable the check altogether.
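For illustration, a test opting out of the check might look like the fragment below. The // no-rls directive is hypothetical here; compiletest would need to be taught to recognize it and skip the lowering step for that file:

```rust
// no-rls -- skip the rls-analysis lowering check for this test;
// reference the tracking issue filed in the RLS repository so the
// comment can be removed once that issue is fixed.

fn main() {}
```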

Some challenges:

  • bors turnaround time is already too high, and running every test through an extra lowering pass would make it worse
  • how to manage the cross-repository workflow (an existing problem we still haven’t satisfactorily addressed, as far as I know)