The way we track regressions today, with ‘regression-from-$channel-to-$channel’ tags, leaves a lot to be desired. In particular, it is impossible to automatically derive any useful historical information about regressions per release, since the tags record only the channel pair, not which release the regression belongs to. I would very much like to be able to issue reports quantifying whether we are doing better or worse at preventing and fixing regressions.
Here is the type of information that would be useful:
- How many regressions were discovered in version X?
- How many regressions were fixed in version X prior to release?
- How many regressions were not fixed in version X prior to release?
- How many regressions in version X were closed as ‘expected breakage’?
It would be nice to change our process sooner rather than later, so that someday we can do this analysis without going back and retriaging every regression ever reported.
I haven’t thought much about the process I’d prefer, but here’s one possibility (a sketch of the resulting queries follows the list):
- Regressions are just tagged with a hot-pink “regression” tag (or maybe I-regression)
- Every release has an “X.Y.Z regressions” milestone, and every regression is dumped into the corresponding milestone
- We leave these milestones open for tracking during the nightly/beta period, but close them once the release is out so they don’t pollute the milestone page
- There is also a “Stable regressions” milestone that stays open forever
- As regression milestones are closed, the issues still open in them (the regressions that actually shipped) are additionally added to the stable regressions milestone
  - This could instead be a “stable-regression” tag (an issue can only belong to one milestone at a time, so a tag may be the only workable option)
- There is an “expected-breakage” tag
  - This seems like the hardest thing to track. I wouldn’t expect devs to remember to tag closed issues like this, so somebody would probably have to go through each regression milestone before closing it and make sure everything was tagged appropriately
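For concreteness, here is roughly how the four questions above might map onto tracker searches under this scheme. This is a sketch only, using GitHub-style search syntax; the “1.9 regressions” milestone and the stable-regression/expected-breakage tags are the hypothetical names from the list:

```
# Regressions discovered in 1.9:
label:regression milestone:"1.9 regressions"

# Fixed prior to the 1.9 release:
label:regression milestone:"1.9 regressions" is:closed -label:stable-regression -label:expected-breakage

# Not fixed prior to the 1.9 release (i.e. shipped to stable):
label:regression milestone:"1.9 regressions" label:stable-regression

# Closed as expected breakage:
label:regression milestone:"1.9 regressions" label:expected-breakage
```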
With this system we could easily find the regressions for each channel, and queries like the ones sketched above would answer each of the original questions. Again, I have not put much thought into the specifics. Just a strawman proposal.
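And if we wanted the periodic reports mentioned at the top, a small script over GitHub’s issue-search API could count these automatically. A minimal sketch, assuming the tag and milestone names above; the repository name is a placeholder:

```python
import requests

API = "https://api.github.com/search/issues"
REPO = "our-org/our-repo"  # placeholder -- substitute the real repository

def count(query):
    """Return the number of issues matching a GitHub search query."""
    q = "repo:{} is:issue {}".format(REPO, query)
    r = requests.get(API, params={"q": q},
                     headers={"Accept": "application/vnd.github+json"})
    r.raise_for_status()
    return r.json()["total_count"]

def regression_report(version):
    """Print the regression counts for one release, e.g. "1.9"."""
    base = 'label:regression milestone:"{} regressions"'.format(version)
    print("{}: {} discovered, {} fixed before release, "
          "{} shipped to stable, {} expected breakage".format(
              version,
              count(base),
              count(base + " is:closed -label:stable-regression"
                         + " -label:expected-breakage"),
              count(base + " label:stable-regression"),
              count(base + " label:expected-breakage")))

if __name__ == "__main__":
    regression_report("1.9")
```

Running something like this for each release as its milestone closes would give us the historical trend essentially for free.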
Opinions?