I must say I’m in awe of you people; it’s the second time someone in this thread has formulated some of my thoughts before I even knew I had them.
I do follow some RFCs, so I know these are not true in the form you state them. But I believe there is a thing in the process that works in part as you describe. That thing is the human subconscious. It plays a very important part in how we think ‒ and I believe this is one thing Rust as a language uses well ‒ for example, the ownership model (after getting used to it) feels much more natural once the subconscious is able to grasp it.
I know you (as in, the teams) do research about the effects of a proposal. However, when deciding, arguments are not considered equal; they are weighted. Arguments that are more readily available are perceived as stronger (that’s how the human mind works). Therefore, if something is written in a prominent place in the RFC itself, it will be seen as a stronger argument than something buried deep down, lost in hundreds of comments ‒ or the point in the comments really needs to be very impressive to balance that.
Furthermore, you said you rely on the community to provide some of the feedback, and while not directly following it every time, you get influenced by it. You say that the RFC process is not a popularity contest, but popularity has some influence. And even if you do full research about the effects of an RFC, members of the community usually don’t have as much time to do that ‒ with day jobs that don’t allocate time for Rust RFCs.
Therefore, I believe something written in the RFC itself has more effect on the decision, in general, than something not written there. Asking the author to write it there both saves many people the research and gives the information equal(er) chances.
Again, written in this form it sounds somewhat ridiculous and is obviously not true. And I believe it’s more nuanced here than with the above.
First, let’s consider a vastly simplified model, where a proposal is either accepted or rejected (I know it gets modified along the way, but let’s simplify for now) and where there’s a universally good decision.
Every point in the process is human-driven. And humans err. So there’s a chance (let’s say 5%) that a team member decides wrong. Depending on how many 95%-good checks are on the way, the percentage of bad decisions that reach the final stage will differ. But unless it’s 100%, you get some percentage of bad ones on the output. So, if you are able to influence the input and turn some bad ones into good ones, you get better overall results. And if you manage to lower the error rate by providing more accurate input (because the decision is based partly on expertise, partly on guessing the future and ‒ sorry, but everyone here is human ‒ partly on random effects like the amount of coffee), you again get better overall results.
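The compounding effect I mean can be sketched numerically. This is a toy model of my own, not anything from the actual process, and the 5% figure is purely illustrative; `p_bad_slips_through` is a made-up helper name:

```rust
// Toy model: each check independently rejects a bad proposal with
// probability 0.95, so a bad one slips through all k checks with
// probability 0.05^k. Fully independent checks are the optimistic case;
// correlated reviewers behave like fewer effective checks.
fn p_bad_slips_through(per_check_error: f64, independent_checks: u32) -> f64 {
    per_check_error.powi(independent_checks as i32)
}

fn main() {
    // One 95%-accurate check lets 5% of bad proposals through;
    // three fully independent ones let through only 0.0125%.
    println!("{:.6}", p_bad_slips_through(0.05, 1)); // 0.050000
    println!("{:.6}", p_bad_slips_through(0.05, 3)); // 0.000125
}
```

The point of the sketch: the output error rate never reaches zero, so improving the quality of the input (fewer bad proposals arriving at all) always improves the output, regardless of how many checks there are.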
So, the question is, how many checks are there? It would be tempting to count the members of the corresponding team making the decision. But that would be correct only if you were deciding independently. You know each other, because you work together, so your decisions are going to converge slightly over time. Furthermore, you’re all the best experts on Rust out there. In this case, that is a weakness (at university, we joked about professors not appreciating the depths of students’ ignorance). If I selected a group of, say, C++ programmers, the group’s sensitivity towards what counts as bloat would be somewhere else entirely (my own experience would suggest an allergic reaction to bloat). Neither is the correct one, and I agree the input of the community helps in part ‒ nevertheless, you have the final say. Here I’m trying to arm you against the inherent bias brought by being experts.
Therefore, if there’s a relatively low-cost way to improve both the ratio of good proposals and the quality of information about how good each specific RFC is, I think it might be worth trying. And if it saves time during the research phase for many other people, I guess the argument for low cost is rather easy to make.
And there’s a third thing. There are not only good and bad proposals. Even among the ones that make it, there are better and worse ones. When a proposal arrives, people try to improve it. However, the amount of energy people spend improving it is limited ‒ not a hard limit, but one depending on motivation. If the proposal is good to start with, you’re bound to have more motivation (because it’s a joy to work with), and as the starting point is better, the result will also be better. And if there are too many „bad“ ones sucking up energy, there’s less energy left to improve the good ones.
In this case, yes. And I think if you make a random selection out of the accepted RFCs, you’re much more likely to find the good cases, because, as pointed out, the process already does a damn good job. But can you honestly say there was never a „Huh, why not?“ RFC? And if not yet, that there never will be? That there’s nothing to improve on? Of course, you can argue that the error rate is so low it doesn’t pay to try to fix it.
I guess this one is more about the perception of the danger of bloat than the danger itself. I mean, if I read mostly the forum, my view of the future turns bleak after a while. I probably should read more of the „mature“ RFCs to have a happier outlook. Anyway, the same „improving the input gives better output“ argument applies here as well.
I saw the original idea in this direction (a negative RFC) as being able to prevent feature bloat ‒ once something was decided to be bloat, it would never get in. However, now I see that passing a negative RFC while the topic is still in a controversial state would not be realistic ‒ so that wouldn’t fly. Documenting the knowledge was a natural byproduct of that idea, one that doesn’t address the original problem (at least, not in any direct way ‒ arguments about lowering the standards by seeing too many re-surfaced RFCs would be quite far-fetched). But it seems to solve another problem.