What a great comment! I find myself in agreement with much of what you are saying here, so this will mostly be about clarifying my position a bit and what I am trying to achieve here.
A tale of complexity and complication
I understand that some difficulties of unsafe emerge from essential complexity (i.e. stuff that is fundamentally hard, like proving any kind of theorem about arbitrarily aliased pointers), and that other difficulties emerge from accidental complication (i.e. stuff that is only hard because of usability issues, like unsafe Rust’s “do not trust safe abstractions” rule).
Here, I am aiming for the latter kind of issue, and agree that the former kind can only be overcome through teaching, experience, and obsessive amounts of QA.
I do not think that either of us really knows how these two kinds of difficulties are distributed. Is it mostly the former? Mostly the latter? A bit of both? You seem to be partial towards the first interpretation, whereas I lean towards the second. In defense of the latter, my experience of software ergonomics has always been that software engineers have a strong psychological bias towards avoiding the “bad ergonomics” explanation, preferring instead to blame either their users or the essential complexity of their product. I know that I am no different, so I consciously try to correct this bias by pushing my mind a bit in the opposite direction. Maybe I am pushing it too far, though; you tell me!
Best-case usability matters too!
The hypothesis I am building upon, rightly or wrongly, is that there is definitely a non-negligible ergonomic component to the difficulty of writing correct unsafe code. That there are things we could do to nudge our users in the right direction, which would come at a relatively low cost when writing correct unsafe code, but which we do not do yet.
I have provided a couple of examples of what I perceive to be low-usability areas in Rust’s unsafe abstraction design, which you have agreed with, so it seems to me that without further proof, we can at least agree that this hypothesis is not completely ridiculous. But you do raise a very good point here, which is that the best answer to a usability issue does not simply make it harder to do the wrong thing. It also keeps it almost as easy, or even makes it easier, to do the right thing.
An unfortunately widespread failure to understand this is what gives real-life safety interlocks a bad reputation: many of them do make it harder to do the wrong thing, but at the cost of also making it harder to do the right thing. That is rarely perceived as a good compromise, even when the harm prevented is much greater than the harm inflicted.
In contrast, well-designed safety systems steer the user away from the error before it has even occurred, ideally going as far as to make the error impossible. Think of the little design cues on a knife that tell you which side of the blade is the sharp one, or the “auto-ignite” mechanisms on gas appliances that make sure you cannot turn on the gas flow without lighting it up.
As you say, Unsafe Rust cannot, by its very nature, reach the “no error possible” usability nirvana. It exists so that, if it is written correctly, Safe Rust can. So our role model for Unsafe Rust ergonomics should not be an auto-ignite gas appliance, but a good knife: still dangerous, yet so easy to grasp that you do not need much training to use it safely. More precisely, it should be a sharp knife: as everyone who regularly uses knives knows, dull knives are more dangerous than sharp ones in spite of being theoretically less so, because the frustration of cutting anything with them will lead you to start doing stupid things with your hands. Annoying safeties are worse than no safety at all.
Reducing mental footprint
I think that clarifying the semantics of unsafe abstractions would be a step in the right direction, because as far as I understand, it would actually make it easier, rather than harder, to implement them correctly.
Today, devising an unsafe abstraction involves a small set of complex decisions, like “where do I need to write the unsafe keyword?” or “should I mark this function as unsafe?”. These decisions are few because each use of the “unsafe” keyword carries a tremendous amount of power in today’s Rust. But they are also complex, because all that expressive power has to be analyzed, and that analysis comes at the cost of a higher mental footprint.
Whenever you write “unsafe” in your code, you need to pause and think “hey, why am I doing that?”. Is it because you want to do something unsafe? Because you assume something that is potentially critical to your code’s safety, or to someone else’s? Because you guarantee something that is critical to memory safety? Or maybe a combination of several of these things? Are you sure that you have thought about all of it? Have you documented it somewhere?
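As a reminder of how much that one keyword carries today, here is a minimal sketch (the names `get_unchecked_at`, `first_or_zero` and `TrustedBound` are made up for illustration):

```rust
// One keyword, several distinct meanings:

// (a) "This function has a contract that the caller must uphold."
unsafe fn get_unchecked_at(v: &[u32], i: usize) -> u32 {
    // (b) The body of an `unsafe fn` is implicitly one big unsafe block,
    //     so this raw-pointer read needs no visible `unsafe`.
    *v.as_ptr().add(i)
}

// (c) "I am asserting, here and now, that a contract holds."
fn first_or_zero(v: &[u32]) -> u32 {
    if v.is_empty() {
        0
    } else {
        // SAFETY: index 0 is in bounds, we just checked that `v` is non-empty.
        unsafe { get_unchecked_at(v, 0) }
    }
}

// (d) "Implementing this trait is a safety-critical promise..."
unsafe trait TrustedBound {}

// (e) "...and I guarantee that promise for this type."
unsafe impl TrustedBound for u32 {}
```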
And then, because unsafe does not mandate documentation of the underlying contract (another thing which I wish static analyzers like clippy or rustc’s warnings would lint on), chances are that the people who maintain that code later on, including your future self, will need to go through this complex thought process all over again. What does each unsafe keyword in this code mean? What is it about? Am I sure that I fully understand it? Can I trust the comments to tell me everything?
All this mental complexity leads to mistakes, which in the case of unsafe means safety bugs. So I wish we could do away with some of this complexity by doing what humans always do when faced with big chunks of complexity: breaking it down into smaller pieces.
What this proposal aims at
And this is, in a nutshell, what this pre-RFC is about: cutting the big scary blob of unsafe abstraction semantics into smaller pieces which the human brain can more easily digest, and using the opportunity to augment the semantics of “unsafe” with notions that we have been talking about forever in the Rust community, like unsafe contracts, without yet being able to express them in code.
By making “unsafe” less monolithic, we turn the process of designing an unsafe abstraction from a set of few complex decisions into a set of many simple ones. We no longer mark methods as unsafe and reverse-engineer what that means after the fact; instead, we…
- …start by putting an unsafe block inside of them (“Rust, please remove the safety net!”)
- …then realize that we need to make an assumption inside of that block (“Hmmm, I really need that slice index to be in range…”)
- …then encode that assumption as an unsafe precondition.
- At that point, ideally, our favorite static analyzer starts warning us that we need to document that precondition. Ah, yes, that is true, we need to do that.
- Then development goes on, and some time later we realize that we need to make the same assumption many times, and that we always guarantee it in the same way. So we extract the code which provides that guarantee into a dedicated function, or perhaps a trait if we want to allow for other implementations in the future.
- “But well”, our helpful static analyzer then starts wondering, “how am I, or your future self and collaborators, to figure out that the contents of this function are critical to memory safety?”. Of course, the analyzer is right. So we mark the function’s output as featuring an unsafe postcondition, and in order to avoid another static analyzer lint, we immediately document what the postcondition is about.
In this model, developing unsafe abstractions moves from a complex exercise in reverse-engineering the semantics of the language and of your code to a more iterative process that evolves naturally from the unsafe block, which is the root of all unsafe development workflows, through a stream of small, simple, and self-contained abstraction design decisions.
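To make the end state of that workflow concrete, here is a sketch in today’s vocabulary. The function names are made up, and the comments only mark where the proposed precondition/postcondition annotations would go; this is not the pre-RFC’s actual syntax, just an illustration:

```rust
/// Reads `v[i]` without bounds checking.
///
/// # Safety
///
/// `i` must be strictly less than `v.len()`.
/// (Proposed: express this as an unsafe *precondition* in code, not just prose.)
unsafe fn read_unchecked(v: &[u32], i: usize) -> u32 {
    unsafe { *v.as_ptr().add(i) }
}

/// Validates an index once, so that several call sites can rely on it.
///
/// (Proposed: mark the return value as carrying an unsafe *postcondition*,
/// "any returned index is strictly less than `v.len()`", so that analyzers
/// and maintainers know this function body is safety-critical.)
fn checked_index(v: &[u32], i: usize) -> Option<usize> {
    if i < v.len() { Some(i) } else { None }
}

fn double_at(v: &[u32], i: usize) -> Option<u32> {
    let i = checked_index(v, i)?;
    // SAFETY: the (for now informal) postcondition of `checked_index`
    // guarantees that `i` is in bounds for `v`.
    Some(unsafe { read_unchecked(v, i) } * 2)
}
```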
Certainly, the price to pay for making each of these decisions easier is that there are more of them along the way. But overall, I think that when dealing with limited human brains, this is almost always the right trade-off.
This will not resolve anything!
…nor does it have to. It is intended as a step forward, not as a silver bullet.
The process of making Unsafe Rust easier to understand and use began a long time ago, with the first visible milestone perhaps being the publication of the Nomicon. It has gone through a large number of intermediate steps, involving many blog posts, the RustBelt project, and the ongoing unsafe code guidelines and memory model efforts.
Not every step was a direct success. Some RFCs were rejected, and some std APIs like scoped threads had to be discarded in a backwards-incompatible way because even the best Rust devs could not reliably tell what was safe from what wasn’t. But even these apparent failures turned out to be learning experiences, which the community could use to move forward and avoid making the same mistake twice. Now we have sound scoped threads. Well, as far as we can tell, anyway!
This pre-RFC is intended to be a stepping stone towards a future of easier unsafe development, which of course may never materialize:
- By clarifying what unsafe Rust is about, it allows Rust’s famed static analysis tools (including rustc’s warnings and clippy) to go further in their analysis than was ever possible before, and hopefully, in the end, to give unsafe code developers more help at their task rather than less (since, by common agreement, they are the Rust developers who need help the most).
- By being designed for minimalism today and extensibility tomorrow (all the way to much more complex and tantalizing features like @gbutler’s statically checked contracts proposal), it makes gradual evolution of unsafe semantics a viable endeavour, rather than imposing from the start a daunting and impossibly large compatibility-breaking change.
Maybe this pre-RFC will ultimately be accepted, possibly after an indefinitely long library-based probation period. Maybe it will be hard-rejected. Maybe it will be left in limbo forever. Maybe it will be superseded by a better idea. But whatever happens, I hope that this thread will be remembered as a useful contribution to the continuous Rust improvement process of figuring out what Unsafe Rust is, what its pitfalls are, and how we can best avoid them…
But to go back to the core topic: yes, even if accepted, this pre-RFC won’t be enough to kill today’s TrustedLen. All it does is introduce new abstraction vocabulary that can be used to devise its eventual replacement, clearly differentiate safe from unsafe abstractions and lint against unsafe code relying on safe abstractions, and gradually phase out the “unsafe precondition means unsafe body” footgun, which we cannot just turn off in the compiler right away due to backwards compatibility promises.
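For readers who have not run into it, here is a minimal sketch of that footgun (the function name is made up for illustration):

```rust
/// # Safety
///
/// `i` must be strictly less than `v.len()`.
unsafe fn get_at(v: &[u32], i: usize) -> u32 {
    // Because the whole body of an `unsafe fn` is implicitly an unsafe
    // block, this raw-pointer read compiles without any visible `unsafe`,
    // even though it carries its own, separate proof obligation.
    *v.as_ptr().add(i)
}
```

Marking the function unsafe was only meant to say “callers must uphold a precondition”, yet it also silently removed the safety net inside the body.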