Add rustc flag to disable mutable no-aliasing optimizations?

The correctness of a given program may indeed require global reasoning if it's putting pointers everywhere. But while correctness over all inputs may require global reasoning in the general case, the Stacked Borrows model is still extremely localizable. In fact, this is exactly what safe references do: they take whatever formal aliasing model the Rust Abstract Machine requires and, by the use of lifetimes, restrict it to a locally machine-checkable subset.

The Stacked Borrows model is essentially taking raw pointers and assigning them the exact same lifetime rules as references have. The only difference is that the lifetime extent of the raw pointers (point-of-creation to point-of-last-use) is under your control, not bound to some (non)lexical scope. Importantly, this directly implies one of the rules you take issue with:

  • Deriving a pointer from &mut, then using that &mut again (for any purpose) invalidates the derived pointer.

These are exactly the same rules enforced by the borrow checker. Maybe it helps to turn the condition on its head: rather than "using this reference invalidates all pointers derived from it" (less local), it's "while derived pointers are still alive (may be used in the future), it's illegal to use this reference" (more local).
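A minimal sketch of that ordering rule (the variable names are mine, not from any real codebase). The derived pointer is used first, then never again, so the reference can be used afterwards; this passes Miri under Stacked Borrows:

```rust
fn main() {
    let mut x = 42;
    let r = &mut x;
    let p = r as *mut i32; // `p` is derived from `r`
    unsafe {
        *p += 1; // fine: `p` is still "alive" and `r` is not being used
    }
    // The last use of `p` is above, so its extent has ended;
    // only now is it legal to use `r` again.
    *r += 1;
    assert_eq!(x, 44);
}
```

Swapping the two increments (using `r` while `p` is still to be used) is exactly the pattern that invalidates the derived pointer.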

So you localize pointer provenance reasoning the exact same way you do with references: you document what the lifetimes are. The lifetimes are obviously more complicated with pointers (otherwise you'd just use references), but they still exist, and you need to document and follow them to make reasoning about correctness tenable.

I find Stacked Borrows surprisingly simple to reason about. Perhaps I'm just overconfident. But at the same time, I think it's very similar to learning to work with lifetimes for the first time: it can seem overly restrictive at first, but as you learn to structure your logic in the way the system wants you to, it gets easier to work with and reason about.

Stacked Borrows isn't perfect as is. Specifically, I'd like to see a proposed solution to the "&Header issue," because that's a large footgun I don't think is strictly required to build a consistent and useful model. But I don't find it as problematic as you're making it out to be.

I don't think the answer is a significantly weaker aliasing model. Rather, I think the solution is more best-effort lints for patterns like let a = &mut x as *mut _; let b = &mut x as *mut _; *a; that are statically detectable, that give the better way to do it that doesn't run afoul of the aliasing model (let a = &raw mut x; let b = &raw mut x; *a;). It's in providing more tools to write code that happens to be unsafe without shaming the developer with too much syntactic salt.


On the other hand, that also means that raw pointer rules end up being "the borrow checker rules, except you are the one enforcing them". I would think that casting pointers to &T (or &Cell<T> or &MaybeUninit<T> or whatever) is already suitable for the cases where the pointer patterns conform to the borrow checking rules, and that the individuals working with raw pointers are interested in working outside the borrow checker rules rather than within them.

It's not true that the raw pointer rules are the same as the borrow checker rules. In fact, raw pointers are very lax; if you stick with only raw pointers, you can do almost whatever you want.

The tricky parts come when you have raw pointers and references and are converting between the two. One issue with Rust as it stands today is that it's difficult to avoid references if you don't want them. But as @CAD97 alluded to, that's set to improve – addr_of! was recently stabilized, and a more ergonomic &raw mut should be stabilized eventually. There may be more things that can be done in this direction.
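As a small sketch of the difference (variable names are mine): casting `&mut x as *mut _` creates an intermediate mutable reference, while `addr_of_mut!` goes straight from a place to a raw pointer without ever materializing one.

```rust
use std::ptr::addr_of_mut;

fn main() {
    let mut x = 0u32;
    // `&mut x as *mut u32` would create a temporary `&mut u32` first;
    // `addr_of_mut!` produces the raw pointer without an intermediate reference.
    let p: *mut u32 = addr_of_mut!(x);
    unsafe { *p = 7 };
    assert_eq!(x, 7);
}
```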

That said, even if you did have a type whose semantics were just "a reference, but without the borrow checker", there would still be plenty of valid use cases for it.

After all, the borrow checker reasons about what the program might do based on its types, whereas aliasing models reason about what the program actually does. Since the type system has limited expressiveness, there's necessarily a gap between the two.

As a simple example, here I temporarily stash a reference in a global variable:

static mut MY_GLOBAL: *mut usize = ...;

fn do_something() {
    let mut x = 42;
    unsafe {
        MY_GLOBAL = &mut x;
        // Now other code can get and set `*MY_GLOBAL`, but should make sure
        // not to store the `MY_GLOBAL` pointer itself.

        // Now reset it before the lifetime goes out of scope:
        MY_GLOBAL = /* some other value not using x */;
    }
}

But there are other use cases that do require the laxer semantics of raw pointers – with respect to aliasing, validity, nullability, and alignment – which is why they have those semantics.
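To illustrate that laxity (values chosen by me for the example): a raw pointer may be null, dangling, or unaligned, and creating, copying, and comparing such a pointer is fine; only dereferencing it at the wrong time is UB. None of this would be legal for a reference.

```rust
use std::ptr;

fn main() {
    // Null raw pointers are fine to create and inspect.
    let p: *const u32 = ptr::null();
    assert!(p.is_null());

    // A dangling, unaligned address is also fine, as long as we never
    // dereference it.
    let dangling = 0x1usize as *const u32;
    assert_eq!(dangling as usize, 0x1);
}
```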

MSVC does not implement type-based alias analysis and hence effectively has -fno-strict-aliasing always on.

Wow. I definitely didn't wake up today expecting to praise a Microsoft product for its design. :^D

Question: is there a good reason that raw pointers don't auto-deref like references do, or is that literally just there to make them even more of a pain compared to references? You would think that using it would be limited to unsafe blocks, but if it's in an unsafe block anyway, auto-deref would make sense to me and would make the situation considerably less awkward. (*myptr).derp is painful to type.

A macro is absolutely the wrong way to do this. They are now adding compiler magic macros just to get around their own problematic aliasing rules. Imho the macro never should have existed and Rust should have had &raw, but without the justification for &raw being aliasing.

Creating a reference from any pointer should never be UB. It should only be dereferencing it, at worst. It genuinely disturbs me that it feels like the previous sentence is a controversial one here.

You may or may not have meant this, but this phrasing suggests you are thinking of the "extra state" as being attached to the pointer, representing its history, which is not how Stacked Borrows works. So to clarify "local" a bit:

The state is attached to the pointee, representing the current set of valid pointers and references, kept in a stack ordered by which ones were derived from which. Using a pointer a particular way always has exactly the same effect on the state for the target location- loading from a pointer pops any unique references derived from it, storing through a pointer pops any other references derived from it, etc.

(In particular this is why creating a reference can invalidate things- it has the same effect on this state as reading (&x) or writing (&mut x), such that it now "knows" via x's borrow stack that it can provide its usual guarantees, assuming nobody uses any pointers it invalidated.)

One important consequence of this is that you simply don't have to reason about the history of pointers, let alone "construct the set of all possible histories of a pointer." Pointers can be passed around freely without affecting the stacks associated with memory locations. Instead, you reason about the history of the things they point at, which you were already doing!
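As a concrete sketch of how a borrow stack for one location evolves (the stack comments reflect my reading of the model above, simplified):

```rust
fn main() {
    let mut x = 1;
    let r1 = &mut x;          // x's stack (roughly): [x, r1]
    let p1 = r1 as *mut i32;  // derived from r1:     [x, r1, p1]
    unsafe { *p1 = 2 };       // uses p1: still on the stack, fine
    *r1 = 3;                  // uses r1: pops p1 off the stack
    // unsafe { *p1 = 4 };    // would be UB now: p1 is no longer on the stack
    assert_eq!(x, 3);
}
```

Note that the state lives with `x`, not with `p1`: passing `p1` around changes nothing until it is actually used.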

Complete side note to the main discussion, but this is silly. The macro isn't there to be magic, it's there to be a placeholder until the exact syntax for &raw is worked out, after which it can be just an ordinary macro. The same thing happened with await!(x) becoming x.await.


I'll just mention that std::ptr::addr_of/std::ptr::addr_of_mut are also necessary to get a pointer to an unaligned or uninitialized field, it's not just about aliasing.
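For instance (the `Packed` type here is a made-up example, not from the thread): taking `&h.value` on a packed struct would create a reference to an unaligned field, which is UB, while `addr_of!` never creates a reference at all.

```rust
use std::ptr::addr_of;

#[repr(packed)]
struct Packed {
    tag: u8,
    value: u32, // at offset 1: not aligned to 4
}

fn main() {
    let h = Packed { tag: 1, value: 0xDEAD_BEEF };
    // `&h.value` would be UB (unaligned reference); `addr_of!` is fine.
    let p: *const u32 = addr_of!(h.value);
    let v = unsafe { p.read_unaligned() };
    assert_eq!(v, 0xDEAD_BEEF);
}
```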

The reasoning is that, since raw pointers are not guaranteed to be valid and can easily cause memory corruption if dereferenced at the wrong time, it should be very clear where you're dereferencing them. There have been proposals before to have a more ergonomic way to do this.

Enough with the emotionally charged language.

References are required to be non-null (allowing Option<&T> to be the same size as &T, where None is represented as a null pointer), dereferenceable (allowing them to be marked as LLVM dereferenceable, which in turn allows LLVM to insert speculative loads), and properly aligned (also used by LLVM). Converting a pointer to a reference is UB if it does not satisfy those criteria. This is longstanding behavior. If you want a pointer that can be anything at all, do not use references.
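The null-niche claim is directly observable: because the all-zeros bit pattern is free to represent `None`, `Option<&T>` needs no separate discriminant byte.

```rust
use std::mem::size_of;

fn main() {
    // `None` is represented as the null pointer, so no extra
    // discriminant is needed: `Option<&u64>` is pointer-sized.
    assert_eq!(size_of::<Option<&u64>>(), size_of::<&u64>());
    assert_eq!(size_of::<Option<&u64>>(), size_of::<*const u64>());
}
```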


There certainly is a way: whatever the borrow checker does soundly approximates the rules of Stacked Borrows. And it is also not too hard to come up with reasoning principles for correct usage of raw pointers that are correct no matter the concrete "prior history" of the pointer. As @CAD97 said, the rules are basically the same as for references -- except that raw pointers are copyable, so when you have a *mut, you can make as many copies of it as you want and they all basically count as "the same pointer" as far as the aliasing rules are concerned.
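A small sketch of that last point (names are mine): plain copies of a raw pointer share the same provenance, so using one copy does not invalidate the others. This passes Miri under Stacked Borrows:

```rust
fn main() {
    let mut x = 0i32;
    let p = &mut x as *mut i32;
    let q = p; // a plain copy: same provenance, "the same pointer"
    unsafe {
        *p = 1;
        *q = 2; // fine: using `q` does not invalidate `p`, or vice versa
        *p = 3;
    }
    assert_eq!(x, 3);
}
```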

This is a strawman argument. We are not talking about whether provenance is intuitive or not, since we are not talking about whether Rust has provenance. It definitely does. Even your model has provenance!

You keep making assertions about Stacked Borrows as if the same would not apply to your own model, but most of them also apply to your model. I encourage you to try and think concretely about how you would implement a Miri extension that checks your model; then you will realize that your model, too, requires provenance -- and in fact more provenance than C without restrict; you need some form of "history" as well as maintaining where a pointer came from.

This is like arguing that reasoning about concurrent programs leads to a combinatorial explosion. Sure, naive reasoning about concurrent programs does, but better techniques exist and the Rust borrow checker (as well as a long line of work on concurrent program logics such as this one I am working on) demonstrates that one can reason about concurrency without a combinatorial explosion.


To add to the example by @comex, here's something else you lose by not invalidating raw pointers when their parent pointer is invalidated:

let mut s = 0;
unsafe {
  let ref1 = &mut s;
  let ptr1 = ref1 as *mut _;
  let ref2 = &mut s;
  let ptr2 = ref2 as *mut _;
  *ptr2 = 2;
  // <code not using s, ref2 or ptr2>
  assert_eq!(*ptr2, 2); // is this guaranteed to succeed?
}

More generally speaking, I would expect that in code like

  let ptr2 = ref2 as *mut _;
  // <not using ptr2 or ref2>
  let _val = *ptr2;
  // <not using ptr2>

I can replace the last ptr2 by ref2. The argument is that ref2 is unique, so there cannot be aliases to it or ptr2, and ptr2 is not escaped (passed to foreign code), so no new aliases violating uniqueness could be created.

But your model does not have this principle -- the moment I turn ref2 into a raw pointer, even locally and temporarily, all the old raw pointers are "revived" and now the floodgates are open for arbitrary mutation. For example:

fn example(x: &mut i32) -> i32 { unsafe {
  let ptr = x as *mut _;
  *ptr = 5;
  other_function(); // does not receive `ptr`
  *ptr // can we optimize this to 5?
} }

This optimization is much simpler than the one brought up by @comex, since only reads are being reordered, not writes. It is valid under Stacked Borrows (with precise raw ptr tracking), but invalid under your model. (other_function could have access to a raw pointer that was created previously and that got "revived" by creating ptr.) This demonstrates that in your model, when there is a raw pointer cast, the optimizer has to give up essentially everything. In other words, when people use unsafe to get more performance, they will lose performance because even simple analyses like the above do not work any more!

It gets even stranger if we make the raw pointer unused:

fn example_dce(x: &mut i32) -> i32 { unsafe {
  let _ptr = x as *mut _; // This is dead code but we cannot remove it
  other_function()
} }

If other_function uses "revived" old raw pointers, then by removing this dead cast, we might introduce UB to the program! Now, to be fair, dead code elimination of casts is a lot more tricky than it might seem -- in particular, casting a pointer to an integer has side-effects under many formal models and hence cannot be optimized away easily. However, I think this is more acceptable for a niche operation such as ptr-to-int casts than the "much less insane" ref-to-ptr cast -- and Stacked Borrows with -Zmiri-track-raw-pointers does allow dead code elimination of such casts (stock Stacked Borrows does not, but only because I basically treat ref-to-ptr casts as ptr-to-int casts).

This might even affect functions that do not use raw pointers in any way, depending on details of your model we have not probed yet... is the following code okay for you?

let mut s = 0;
unsafe {
  let ref1 = &mut s;
  let ptr1 = ref1 as *mut _;
  let ref2 = &s;
  let ptr2 = ref2 as *const _; // this revives old raw pointers
  *ptr1 = 1; // are these pointers allowed to mutate?
  return *ptr2;
}


Of course each time we weaken Stacked Borrows, we lose some optimizations. This is a land of trade-offs, and I am not suggesting that we cannot lose any of the optimizations that Stacked Borrows provides (indeed then we couldn't fix any of its issues). But if we lose too many optimizations, the model ceases to be worth the effort it takes to comply by it. If raw pointers cost a lot of performance, people will be pushed towards using references more than they should. So this always has to be weighed against the kind of code that is newly allowed -- we have to weigh the gain against the loss. I am not convinced that code relying on "dead" raw pointers being "revived" is the kind of code we want to support, the kind of code we want people to write.

Also let me point out one rather ironic fact: Stacked Borrows as implemented in Miri will actually accept all these examples you are asking for, because raw pointers are barely tracked and Miri makes no effort to "distinguish" different raw pointers. -Zmiri-track-raw-pointers changes this, it enables precise tracking, but by default that flag is off. So in a sense, Stacked Borrows with all its complexity already provides the model you are asking for (and I'd be surprised if you could achieve the same effect with a significantly less complex model, but it'd be a nice surprise). Do you have any code that you think is okay but that Miri (without -Zmiri-track-raw-pointers) rejects? That would be interesting because it would show that you want more than "just" indiscriminately reviving all old raw pointers.

However, and this is where things become ironic, people are constantly tripped up by this. I am regularly getting bug reports that Miri fails to detect aliasing UB somewhere, and it often boils down to code that revives old raw pointers (example, other example, long discussion where people voice their support against reviving raw pointers). People don't expect old raw pointers to be revived. What you consider "more intuitive", "more local" and "easier to reason about" is considered "surprising" and "spooky action at a distance" by other programmers. It's not just me and @CAD97 who consider it more intuitive that raw pointers would be subject to the same kind of scoping that references are subject to.


Ah, yes, your example is better.

Intuitively I would want to say that, under @tcsc's idea, it would still be UB to access ptr1 or ref1 during // <code not using s, ref2 or ptr2> – because the access to ptr2 at the end extends the "lifespan" of ref2, and any accesses within that span not derived from ref2 are violating the unique access condition.

But this is exactly equivalent to saying that any access to ptr1 or ref1 during that time permanently invalidates ptr2.

It's a good example of how deceptively complex aliasing is – or really, how deceptively complex compiler optimizations in general are. What seems like a simple optimization from the compiler's perspective, when transformed into a list of rules you must follow at runtime to justify that optimization, ends up very strange and nonintuitive. But you're the expert on that already :slight_smile:


Are you sure you don't want to just add the option to rustc and cargo to disable the aliasing rules if the user wants to?

Maybe it's not obvious, but there's another benefit to adding -fno-strict-aliasing, other than feature parity with almost all C and C++ compilers.

It allows the rustc team to go with even more aggressive optimizations, if there is a way to turn it off. It's clear Rust is trying to squeeze every bit of performance out that it can. I understand that.

I find the argument that user code might require a unique &mut to be a non-issue. For example, C's memcpy() has aliasing requirements, and users still manage to use it correctly. I do not foresee a lot of deliberate use of this so much as I foresee it being used as a safeguard against UB.

There is inherent danger in passing a reference created from a pointer to anything, so if the code malfunctions, well, it would also malfunction if you created the reference from a null pointer or a half-overwritten freed value, too. That's the nature of unsafe. It should be the programmer wielding the unsafe keyword's choice, not the compiler author's, on whether to sacrifice freedom for performance.

Being able to turn this off alleviates the issue because then accidentally having two &mut s to the same object can only be UB if you access the data at the same time. That's an easy requirement to meet, and unambiguous.

Please don't try to tell me there's user code that can tell if there's another &mut sitting somewhere. That's just not true. (Other than user code built with strict aliasing enabled; that's why cargo must make the option recursive.)

It will only care if you're accessing that value in the other reference. If by some miracle Rust really does support a runtime method to sniff out other &mut references created from the same pointer, please link me a source code example of that, I'd be very interested in seeing it.

For all the pushback against the -fno-strict-aliasing option, I don't really believe the objection to it on technical grounds. I'm sorry if I've been snippy before, but this frustrates me, because I'm trying to figure out why it's so controversial. The option prevents UB and unsafety, which is what Rust is all about.

If you add the option, turn off whatever aliasing optimizations you want, and show in a benchmark the performance difference, then warn users what they're giving up if they turn it on.

I'll be blunt: The big reason I'd turn on -fno-strict-aliasing is to alleviate my anxiety and rule out an accidental aliasing bug as a cause of miscompilation I observe. Accidental concurrent access would be a different story, but with very different and distinct symptoms, so I'm really not concerned about that at all.

What I personally do know for sure is that the option can be added, that it would work as intended, and that for any user-code objections there are several solutions; the one I'm behind is a recursive cargo option for dependencies.

Honestly these rules are severe enough to seriously make me consider reaching for C++ the next time I have to deal with questionable pointers, or a lot of raw pointers at all, really.

I've been told that somehow some of my posts are offensive. I'm sincerely sorry for that. I'm not trying to be combative; I'm confused and more than a little bit frustrated as to the objection to the original request of this thread, and I very strongly believe that forcing such aliasing rules with no opt-out is a mistake. I am sincerely frightened at the prospect of a future with these optimizations unable to be disabled. That's not hyperbole, that's honesty.

What you are failing to realize (or less likely don't want to realize) is that what you are proposing is essentially a new language.

One that looks and feels very similar to Rust and can even call Rust. But that Rust itself cannot call (without all the same rules applying). Even C code needs to follow the same rules with the pointers that Rust passes to it.

I think that is the main reason why there is pushback against your proposal. This community doesn't want to split itself and cause confusion. Namely, if a library were to be developed with your proposed flag in mind, ALL USERS of that library MUST also use that flag.

At least that is how I see it.


How so? I would set it for dependencies, not for dependents. The idea is to make it safe for the -fno-strict-aliasing code to use other Rust libraries, but I wouldn't go so far as to also require any crates above it to also require the flag. If you're writing a crate, you should be sure that your users won't hit UB. That's just good practice.

I should also note that merely enabling this does not mean that you will hit UB, or that you even need to be careful. All it does is decrease the probability that you can hit UB at all.

Now I think that's a bit much. No, I want the borrow checker and everything else to be exactly the same. I'm just trying to turn off some optimizations. I'm not trying to change the semantics or syntax of the language at all. And as above, yes, Rust code can call into it. It will be the library using -fno-strict-aliasing that must ensure its dependents don't get bad data.

C and C++ have had -fno-strict-aliasing for years. In 10 years, you know how many times I've hit a bug caused by libraries with mismatched aliasing optimizations interacting? Zero. It's never happened to me. And the new language thing, if that's the objection people have, that's hyperbole. :^)

Indeed. And, at risk of sounding like a broken record, it also has to be implemented by everyone (not just the rustc team). And, again, it may conflict with other things the implementation may want to rely on that aren't optimizations.

I mean, it really is. You're proposing making an operation defined that is otherwise undefined.

mem::uninitialized is now undefined behavior. Does that mean that all versions of rustc before that happened were not actually Rust? That's essentially what you're saying here, at least that's how it sounds to me. It doesn't make sense to me.

I've fixed it in my comment. Technically, yes, Rust can refine operations that are undefined to being defined. However, your proposal forks the language into two dialects: one where mutable aliasing is defined, and one where it is undefined. In fact, C/C++ does have this issue, as mentioned: relying on -fno-strict-aliasing means that you can only code in C++ with -fno-strict-aliasing, which is not C++. You're proposing the same thing for Rust, and effectively making it mandatory to support.

In both of my perspectives (as a programmer, and as an implementor), you're proposing to make it so that now I have to worry about people relying on this flag. As a programmer, you're forcing me to consider that &mut self does not absolutely guarantee exclusive access, period. As an implementor, you're forcing me to forgo tagging pointers in a particular manner with a particular flag (thus affecting my ability to detect and diagnose undefined behaviour under that flag). And again, you are not actually removing the underlying issue, you are just hiding it away. If someone forgets to enable the flag, then your code is just as broken as if the flag didn't exist in the first place (notably and thankfully, it is not possible to enable arbitrary rustc flags at the crate level).


It's also worth noting: if you're asking for it to be easier to write code that mixes raw pointers and references, and is able to use more safe code and less raw pointers, I think that'd be a great thing, and there are potentially ways the language could help with that in the future. That wouldn't bifurcate the language, the way that allowing multiple &mut references to the same data would.

So, if you're saying you want to be able to maintain the "mutable references are exclusive" property, but have an easier time writing code that mixes raw pointers with mutable references as long as you maintain that property, that's a very reasonable ask.

If you're saying you want to break the "mutable references are exclusive" property, that's a much more serious proposition, and one that isn't likely to happen. That's where you're getting steered towards UnsafeCell, raw pointers, and similar, instead.
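For completeness, a minimal sketch of that sanctioned route (values are mine): `Cell`, built on `UnsafeCell`, lets multiple aliases mutate the same data without ever breaking the exclusivity of `&mut`.

```rust
use std::cell::Cell;

fn main() {
    // `Cell` permits mutation through shared references: two aliases,
    // no `&mut` exclusivity violated, no UB.
    let a = Cell::new(0);
    let b = &a; // freely aliased
    a.set(1);
    b.set(b.get() + 1);
    assert_eq!(a.get(), 2);
}
```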


That's true, I'll give you that. But the things that actually become possible for me when I disable strict aliasing makes it well worth it.

I see. Okay. Well now I understand some of the push back better, because it can be a pain to deal with things like this. I'm sorry to say I still think it's necessary, but I do sympathize with you.

Here's my perspective. Please forgive me if I sound annoying or offensive, I'm really not trying to be. I'm trying to be honest.

My perspective is this: it's not about you, as rustc implementors, because however much minor nuisance it may cause you to implement, it will save many more Rust users a different source of headaches. To quote Spock: "The needs of the many outweigh the needs of the few". Look, I hope I'm wrong and I'm really, really not trying to attack anyone, but from my perspective, it looks like Rust has an unhealthy and arguably unethical pattern of attempting to control what code its users can write. In my eyes, the purpose of the borrow checker (and arguably of Rust itself) is to stop you from making mistakes, not to prevent you from doing something deliberately. And in safe Rust, or whenever the option is not enabled, your assumption will be pretty much correct.

In many years of unsafe programming, I have been grateful for -fno-strict-aliasing on many occasions. It's helpful for certain unsafe operations and can give you peace of mind. The borrow checker will still scream at you if you try to create more than one &mut, but if you accidentally make one from a pointer, it won't be the end of the world anymore. And hey, to quote GLaDOS, "The best solution to a problem is usually the easiest one". Instead of tormenting yourself for days trying to get a safe abstraction for a particularly hot piece of code to allow truly concurrent mutable access, -fno-strict-aliasing can make that a whole lot less painful. I need a real systems language I can trust, not a Google Go ripoff. I want a language that lets me do unsafe things when I actually intend to do them. That's something C++ gets right. What it gets wrong is being unsafe by default. That opens up an enormous can of worms, which is the reason I learned Rust to begin with.

The reason this matters so much to me is this: Rust seems about to achieve critical mass and become mainstream. At that point, it will be used in work more often (though I already use it at my job), and I'll be forced to deal with the decisions you make here and now. Even if I don't break the aliasing rules, which most of the time I probably won't break even once, having that option gives me peace of mind, and that's worth a lot to me.

Alright, phew! Sorry for the length.

EDIT: One more option: You could only enable aliasing optimizations if the compiler can prove that the reference was not created from a pointer. That would be more than acceptable.

For the record, I do not consider what you are saying to be attacking anyone. That being said, if your definition of a "pattern of attempting to control what code its users can write" is proscribing operations that are invalid (Undefined Behaviour), then C++ is arguably worse. -fno-strict-aliasing does not exist in the C++ language, just in some implementations. However, because rustc arguably sets a standard CLI, proposing this flag effectively does require every implementation to support it. You say that it is a case of "needs of the many", but it also affects things the implementation can do. For example, you haven't described how such a flag would interact with const UB checking and diagnostics. In a fictional future version of Rust that has C++-like rules for when things must be evaluated at compile time, and a function like std::is_constant_evaluated, it's possible to detect whether an operation is a valid constant expression (in fact, on lccc, which provides access to its entire set of intrinsics to any code in any frontend, it is possible to do so, though outside of the language, and it requires an unstable feature). So, technically, in such a universe, this flag would not only define an otherwise undefined op, it would be observable to a program (i.e., the program could detect whether the flag is enabled).