I'm pretty sure this part is false for the pre-1.0 discussion; there are acknowledgements in that discussion that they are different.[1] My interpretation is that they didn't want to be too much more restrictive than C,[2] but also didn't want the soup of C int types, so the performance hit on "legacy" or exotic systems like the SNES was considered a cost worth paying.[3] (Some of the people who made that decision are still around, so there's no need to rely on my interpretation if you want to ask them.)
I also don't see how it follows for Group A. For example, the w65 platform implementation thread wanted `size_t` to be independent (`size_t != usize`) without changing `usize = uintptr_t`. See also Ralf's comment just above.
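To make that shape concrete, here's a minimal sketch. The widths are my illustration of the w65-style situation, not anything from an actual target spec, and `c_size_t` is a purely hypothetical alias:

```rust
// Assumed C model on such a target (illustrative numbers only):
//   uintptr_t = 32 bits (wide enough for a 24-bit far pointer)
//   size_t    = 16 bits (objects confined to a 64 KiB bank)
// The ask: `usize` keeps matching `uintptr_t`, and C's `size_t` gets its own
// FFI alias instead of being spelled `usize`.

#[allow(non_camel_case_types)]
type c_size_t = u16; // hypothetical alias; would presumably live in core::ffi or a libc crate

/// What a binding might look like: address-sized values keep using `usize`
/// (still `uintptr_t`-sized), while length parameters use the narrow alias.
fn fill(dst: *mut u8, byte: u8, len: c_size_t) {
    // Widening happens explicitly at the boundary, not implicitly in `usize` math.
    let len = usize::from(len);
    for i in 0..len {
        unsafe { dst.add(i).write(byte) };
    }
}

fn main() {
    let mut buf = [0u8; 4];
    fill(buf.as_mut_ptr(), 0xAA, 4);
    assert_eq!(buf, [0xAA; 4]);
}
```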
> In that case, support for CHERI should definitely be opt in... if ever supported on Rust 1.x.
Is this agreed upon generally? I've not gotten this impression re: `fuzzy_provenance_casts` before, but perhaps I just missed it.[4]
Let's say I just missed it. Then making `fuzzy_provenance_casts` a hard error[5] could basically be an ecosystem-split approach, wherein Rust's guarantees without opt-in are extended to `size_t == uintptr_t`. The RFC for that should not be phrased as `usize = size_t`; it should be phrased as `uintptr_t = size_t`. `usize = size_t` then follows from the pre-existing `usize = uintptr_t` guarantee. And that would be backwards compatible; the question becomes whether the (future, when-`uintptr_t != size_t`-platforms-are-sorta-supported) ecosystem split is acceptable.
If it's acceptable, I believe this could be phased in over time in a non-breaking manner.
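For reference, here is the kind of cast at issue next to the explicit APIs the lints point toward. My understanding of the split is that `lossy_provenance_casts` covers ptr-to-int and `fuzzy_provenance_casts` covers int-to-ptr; the lints themselves are nightly-only, and the exposed-provenance methods below have since landed in std under these names (if I have them right):

```rust
use std::ptr;

// Plain `as` casts: this is what the lints would flag in every existing crate
// that round-trips pointers through `usize`.
fn roundtrip_via_as(p: *const u8) -> *const u8 {
    let addr = p as usize; // ptr -> int: provenance is dropped
    addr as *const u8      // int -> ptr: provenance has to be guessed
}

// The explicit replacements: exposure is opt-in, which is the part a
// CHERI-style target could actually assign a meaning to (or reject).
fn roundtrip_explicit(p: *const u8) -> *const u8 {
    let addr: usize = p.expose_provenance();
    ptr::with_exposed_provenance(addr)
}

fn main() {
    let x = 5u8;
    let p = &x as *const u8;
    // On today's mainstream targets both round-trip identically; the whole
    // question is which of them keeps being guaranteed without opt-in.
    assert_eq!(roundtrip_via_as(p), p);
    assert_eq!(roundtrip_explicit(p), p);
}
```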
But I take it that what you actually had in mind is basically what you were pitching before, wherein backwards compatibility is not guaranteed but only a "best effort", as per the exceptions listed, and `usize` is redefined. I still don't see how that can be introduced in a non-breaking manner, or how the benefit of a smaller-than-`usize` index integer can be introduced without the redefinition.
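To spell out the breakage I mean, a hedged sketch: the first function is the kind of code that today's `usize = uintptr_t` guarantee blesses and that a redefinition to `size_t` semantics would invalidate on a `size_t != uintptr_t` target; `index_t` is a made-up name for the smaller index/length type:

```rust
// Relies on the current guarantee that a pointer's address round-trips
// through `usize`. Pointer tagging like this is common in existing crates;
// if `usize` were redefined to the (potentially narrower) `size_t`, this
// would have to truncate or be rejected -- a breaking change either way.
fn tag_low_bit(p: *mut u8) -> *mut u8 {
    ((p as usize) | 1) as *mut u8
}

// The non-redefining alternative: keep `usize = uintptr_t` and introduce a
// separate index/length type. On today's targets it would just be `usize`;
// on a `size_t != uintptr_t` target it could be narrower.
#[allow(non_camel_case_types)]
type index_t = usize;

fn main() {
    let mut x = 0u8;
    let p = &mut x as *mut u8;
    let _tagged = tag_low_bit(p);

    let v = vec![1, 2, 3];
    let _len: index_t = v.len(); // the length/indexing use-case the smaller type is aimed at
}
```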