I was talking to @nrc about RLS reliability and they mentioned that one common problem is that new features added to rustc wind up violating some (perhaps accidental) invariant of the save-analysis data that the RLS was relying on (example). This is particularly problematic for new features, since they may use existing data structures in new ways.
I was thinking that we could catch such problems faster if we leveraged our existing unit tests (much like we do for the HIR pretty-printer). We could set up a kind of "multiplicative" harness by doing the following:
- For each test file we have, run it with `-Zsave-analysis`
- For each resulting save-analysis file, run the `rls-analysis` crate's "lowering" step
This way, we would catch things that will break the RLS before they actually do break it. For some changes, I imagine the fix will need to happen in the RLS, in which case we would add a `// no-rls` comment to the test and file an issue over in the RLS repository (once that issue is fixed, we remove the comment). We'll also need some kind of "master switch" to disable this check altogether.
Some challenges:
- bors turnaround time is already too high, and running the lowering step over every test would add to it
- how to manage the workflow (an existing problem we still haven't satisfactorily addressed, afaik)