FWIW, one more thing this unfortunately does not resolve is the entire "interaction with memory shared with untrusted parties" discussion. That requires some kind of "atomic volatile". I am bringing this up just to make sure we all agree that this is a separate issue that would still need solving even if we adopt this proposal.
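To sketch the problem (the function name and pointer cast below are purely illustrative, not a proposed API): for memory shared with an untrusted party, volatile accesses are never invented or elided by the compiler but are non-atomic, so a concurrent write by the other side is a data race (UB); atomic accesses avoid the race but are not volatile, so the compiler may merge or remove them. What we would need is an access that is both.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical illustration only: reading from memory shared with an
// untrusted process that may write to it at any time.
unsafe fn read_shared(ptr: *const u32) -> u32 {
    // Volatile read: the compiler will not invent or elide it, but it is
    // non-atomic -- a concurrent write by the other side is a data race,
    // i.e. UB under the C++ model.
    let _racy = std::ptr::read_volatile(ptr);

    // Atomic read: no data race, but not volatile -- the compiler is
    // still allowed to merge or otherwise optimize such accesses.
    (*ptr.cast::<AtomicU32>()).load(Ordering::Relaxed)
}
```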
So, coming to the "unordered" proposal: my main problem with this is that while the C++ memory model is extensively studied, these extensions are not. Right now, as Rust uses the C++ memory model, we can rely on a large body of academic literature that provides us with a good foundation of things we know about that model. These are not just theoretical concerns; that academic work has uncovered several serious bugs in the definition of the memory model (the very first C++ model was entirely broken until Mark Batty fixed it, and even then some issues remained; the out-of-thin-air problem is still unsolved today).
So, what is the situation as we go beyond C++?
"unordered"
There are some models coming out of work on Java that have "unordered", but the only one I know of that has both "unordered" (they call it "plain") and the other orderings and fences Rust already has is the Promising Semantics. I really like that paper, but at least the original model is incompatible with ARM or POWER (I forgot which), and there are also some optimizations one might want that it does not allow. It is also a big and subtle model, so there are probably other issues. Oh, and it does not have non-atomic accesses at all; "plain" ("unordered") accesses are the weakest accesses it has.
The Promising Semantics are a great first step towards giving an operational specification to things like relaxed and unordered and towards solving the out-of-thin-air problem, but I do not think they are ready for adoption by a programming language.
LLVM
So, alternatively, we could say "we do whatever LLVM does".
The issue is that the LLVM memory model has (to my knowledge) seen no academic study at all. To be concrete, I am not aware of a single formally defined memory model, with some reasonable theorems about it, that follows the LLVM approach of making write-write races (two overlapping writes racing with each other, at least one of which is non-atomic) not UB, but instead having them reset the affected memory to "undef"/"uninitialized".
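For concreteness, here is a minimal sketch of what a write-write race looks like in Rust terms (the program itself is of course UB in Rust today; the comments contrast how the two models treat it):

```rust
use std::thread;

static mut X: u32 = 0; // plain, non-atomic shared location

fn main() {
    // Two unsynchronized non-atomic writes to the same location:
    // a write-write race.
    let t1 = thread::spawn(|| unsafe { X = 1 });
    let t2 = thread::spawn(|| unsafe { X = 2 });
    t1.join().unwrap();
    t2.join().unwrap();
    // C++ model: the race makes the entire execution UB.
    // LLVM model: the program remains well-defined, but X now holds
    // "undef"/"uninitialized", so reading it yields an arbitrary value.
}
```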
I personally am not even convinced yet that it is possible to do the LLVM-style treatment of write-write races while also doing all the other optimizations. Remember that LLVM's model is axiomatic: it is specified as "give me a model that has all of the following properties" -- without any indication that such a model exists. This approach has already produced subtle bugs when applied to things like pointer provenance; the bugs it causes in concurrency memory models will likely not be any less subtle.
I talked with some weak-memory experts about LLVM's treatment of write-write races (I do not consider myself one), and we ran into some interesting questions about how to even define a data race: in LLVM's world, where programs with data races can go on and have well-defined semantics, the axiomatic C++ definition basically falls apart. It feels like you need an operational model to make progress here (and so far, the Promising Semantics are the only operational model we have that handles relaxed accesses).
Conclusion
By adopting "unordered", we basically commit to doing whatever LLVM does in terms of concurrency. That is a huge risk: prior experience from C++ and Java shows that naively defined concurrency memory models are full of flaws.
Moreover, we (in the UCG) are already regularly experiencing the pain caused by other parts of Rust that rely on LLVM semantics (such as integer-pointer casts or getelementptr inbounds). Unfortunately, LLVM has a history of not specifying which assumptions optimizations may make and which assumptions programs may rely on (IR semantics are specified for the "happy case", but what exactly is and is not UB is frequently even less clear than in C/C++), and they are also not very willing to improve their docs in that regard. If we also rely on LLVM for the concurrency semantics, we add all of concurrency to the list of areas where we will regularly be unable to give precise answers to reasonable questions.
On top of that, I think there is a significant risk that LLVM's model is unsound in subtle ways, meaning it does not actually permit all the optimizations that LLVM developers think it permits. For C++, the question "is this optimization correct?" is at least mathematically well-defined, and there is a body of literature proving correct the reorderings compilers perform. Without a formal model, we might not even be able to assign blame ("out of these 5 optimizations that together caused the bug, this is the one that is wrong"). And that assumes we even notice such a bug and are able to locate and minimize it, which is notoriously hard for concurrency issues.
That's why, the way things stand right now, I am opposed to leaving the scope of the C++ memory model.
Alternatives
From what I understand, while relaxed accesses can't be optimized quite as aggressively as @comex thought, they can still be optimized better than what LLVM currently does. So it might be worth trying to better exploit the freedom we already have, instead of changing the language; one example of such a missed optimization is sketched below.
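As a hedged illustration (this is a well-known example from the literature on optimizing atomics, e.g. N4455, not something I am claiming LLVM does today): two back-to-back relaxed loads of the same atomic may legally be merged under the C++ model, because an execution in which both loads read from the same write is always allowed.

```rust
use std::sync::atomic::{AtomicU32, Ordering::Relaxed};

fn sum_twice(x: &AtomicU32) -> u32 {
    // As written, this performs two relaxed loads.
    x.load(Relaxed) + x.load(Relaxed)
    // The C++ model permits transforming this into
    //     let v = x.load(Relaxed); v + v
    // i.e. merging the two loads, but (to my knowledge) LLVM
    // currently keeps both.
}
```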