I understand why stable-to-nightly regressions get special notice: they aren't supposed to happen. At the same time, it is a bit of a dodge to say "a thing that should work in stable, because all of its components are marked stable, need not have regressions tracked unless it was confirmed working in stable".
From my point of view, which is not especially privileged, there is a thing that is supposed to work in stable, and that had indeed been working for a while in nightly (my thinking: "yay, it has been fixed"). Then it stopped working, and it only now occurs to me that whether it works or not may not actually be on anyone's radar.
Since you are right that whether a thing ever works in nightly isn't particularly germane, maybe what would be better (edit: better "than not having them", as opposed to "than stable-nightly regression tests") is a set of tests for "things that should work in 1.0 stable, but don't". I don't think the 2k issues on GitHub are doing this.
I totally get that the regression report is just one signal, and one that should get some priority (regressions are embarrassing, and they give explicit information about what broke where). At the same time, it seems that other things are still broken, and I wanted to provide that input. I've since cut my code down to the "subset of 1.0 stable that actually works", since otherwise, as you say, there are no promises.