Stacked Borrows: An Aliasing Model For Rust

Could you explain briefly why freeing the compiler to optimize if there is UB is a desirable goal? I would think it would be better to simply abort/fail to compile for UB. Why is well-defining what UB is, then using that definition to allow for optimizations useful? If something is UB, and I optimize it, isn’t it still UB? How is a program with UB useful?

EDIT: As I think about this more, I think I understand: a “compiler” like LLVM can optimize based on UB in the LLVM IR presented to it because it can assume that the higher-level compiler has already ruled out the UB case and that it will never occur in practice. Therefore, the low-level compiler can optimize “knowing” that the UB will never actually occur, even though it can’t statically prove that from the information (structured IR) presented to it. Is that roughly correct?

I’m sure Ralf will have a better answer here, but since I think I understand this stuff I’d like to double-check that I do understand it by writing down what I believe the answers are in case I need correction.

This is a bit of a strange question because license to optimize is the only reason why we want unspecified, implementation-defined and undefined behavior to exist in the first place. I’m not sure there’s a more direct/helpful answer than “if that optimization wasn’t desirable, then that code wouldn’t be UB.”

I think what’s happened here is some confusion between the compiler knowing for sure that a program will exhibit UB, and the compiler making optimizations based on the assumption that UB will not be exhibited. When the compiler actually does know UB is going to happen, that absolutely should be a compiler error (afaik this is totally uncontroversial). But when we ask the compiler to optimize a function that takes two mutable references, we don’t expect the compiler to go check all the call sites of that function and prove the references cannot possibly alias, which it likely can’t do in the presence of sufficiently complex unsafe code or external crates. Instead, we say it’s UB for mutable references to alias, so the compiler can go ahead and compile the function with the obvious optimizations you’d expect, and the presence of unsafe/external/etc code near one of its call sites does not magically deoptimize anything.
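
To make the kind of optimization meant here concrete, here is a small sketch of my own (not from the thread): with exclusive references, the compiler may assume the two pointers do not alias, so it does not have to reload through one after writing through the other.

fn example(x: &mut i32, y: &mut i32) -> i32 {
    *x = 1;
    *y = 2;
    // Because the two &mut are assumed not to alias, the compiler may return
    // the constant 1 here instead of re-reading *x after the write to *y.
    // If unsafe code at some call site passed two aliasing &mut, that
    // assumption would be wrong -- which is exactly why such aliasing is UB.
    *x
}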

Pretty much the same as the first question. If you have a desirable optimization that can’t be done without declaring something UB, then we can’t actually make that optimization work in practice with other people’s unsafe code unless the authors of that unsafe code know what is and isn’t UB.

Well, yeah. Rust-level UB is different from LLVM IR-level UB is different from x86-level UB. I think we’re mostly talking about Rust-level UB, which is just a property of the Rust source code, not a property of the final executable. How much optimization was done by the Rust compiler has nothing to do with whether the Rust source code had Rust-level UB, unless the compiler is literally editing your source code for some reason. If you meant “does the x86 executable have x86-level UB if compiled from Rust source code with Rust-level UB?” then the answer is “it may or may not; Rust-level UB means the Rust compiler is allowed to do anything at all with it.”

A program that actually exhibits UB is never useful in theory. In practice, it’s only useful if your compiler happens to be acting as if it wasn’t exhibiting UB, either because you configured it to do so or because it’s not fully exploiting that license to optimize yet.

I think this is the correct intuition for why LLVM IR has more complex (and harder to auto-check) rules for UB than we would like surface Rust to have. Though surface Rust still needs some notion of what UB is to justify all the optimizations we want around arbitrary unsafe code.


Note that the “nondeterminism” here is not what a system programmer would call “nondeterminism”.

  1. For a system programmer, nondeterminism generally refers to “environment nondeterminism” - which random value is read from /dev/urandom, which request arrives first on a socket, et cetera. The compiler is not allowed to make arbitrary choices here - if a program has a buffer overflow when it receives requests in the wrong order, it is still well-defined as long as requests arrive in the correct order.
  2. For twinned allocation, nondeterminism is something that the compiler can choose - it can place stack arguments in whichever locations it wants, and leave “traps” in arbitrary memory locations to “catch” code that performs “random” memory accesses. This nondeterminism essentially only happens in theory - if code mangles memory, the compiler can retroactively claim that the address the mangling code accessed was a “trap”, and therefore UB happened. This has nothing to do with “environment” nondeterminism.

I’m trying to differentiate this model from the ACA model (https://github.com/nikomatsakis/rust-memory-model/issues/26), to explore model space.

One significant difference is that ACA has lifetimes, while this model doesn’t. However, I don’t think this is as big a difference as it seems.

Axis 1 - lifetimes

I think the barrier + use semantics are fairly close to making a borrow’s lifetime last as long as it is either live or kept alive by a barrier. This is not necessarily a valid “Rust” assignment of lifetimes - it can violate “artificial” lifetime constraints (in ACA, you could fix a lifetime to anything you want by using lifetime constraints), and “interior borrow constraints”, and of course it is a dynamic, rather than a static, notion.

By “interior borrow constraints” I mean the following:

fn foo(x: &mut Option<&mut u32>) { unsafe {
    let x_ptr: *mut Option<&mut u32> = x;
    let t = &mut *x;
    let x_inner: &mut u32 = match t {
        &mut Some(ref mut x_inner) => &mut **x_inner,
        None => return,
    };
    *x_ptr = None;   // write through the raw pointer
    *x_inner = 1;    // any later use of x_inner
}}

By ACA, this is UB - the reborrow in x_inner asserts ownership of x for the entire lifetime of x_inner, which conflicts with the write *x_ptr = None;. By the stacked borrow model, t is not live after the match statement, so there is no conflict.

Axis 2 - lifetimes of raw pointer

This is a bit similar to axis 1.

fn foo(x: &mut u32) -> u32 {
    let raw: *mut u32 = x;
    let t = &mut *x;
    *t = 0;
    unsafe { *raw } // is this UB?
}

In the stack-based model, if I understood it right (did I?), creating t pops the Raw off the stack, which makes further accesses through the raw pointer UB.
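
For concreteness, here is my rough walkthrough of how the borrow stack for the memory behind x would evolve in that example, under the reading above (same function as before, just renamed; take the stack annotations with a grain of salt):

fn foo_walkthrough(x: &mut u32) -> u32 {
    let raw: *mut u32 = x; // push a raw item:                  [Uniq(x), Raw]
    let t = &mut *x;       // using x pops Raw, pushes Uniq(t): [Uniq(x), Uniq(t)]
    *t = 0;                // Uniq(t) is on top, so this is fine
    unsafe { *raw }        // Raw is no longer on the stack -> UB under this reading
}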

In the ACA model, this depends on how the lifetimes are assigned, so the situation is more vague. If the lifetime of the raw pointer is the same as x, this is OK, because as long as the raw pointer is alive, x does not protect the memory behind it.

The intent, IIRC, was that the compiler would be conservative and make it OK.

Axis 3 - stack vs. cactus stack

The ACA model is OK with creating a “cactus” of mutable references, as long as only 1 of them is used in the end. I think that this works less well with the “stacked borrow” model, but that this is orthogonal to the “strict liveness” restriction (i.e., it is possible to create a “cactusy” lifetimeless system).

I think the “cactus” behavior is essential for “unsafe mutability polymorphism”, as in https://github.com/nikomatsakis/rust-memory-model/issues/31

fn foo(a: bool, r: &mut u32) { unsafe {
    let r1a = &mut *(r as *mut u32);
    let r2a = &mut *(r1a as *mut u32);
    let r1b = &mut *(r as *mut u32);
    let r2b = &mut *(r1b as *mut u32);
    if a {
        *r2a = 0;
        // using r2b is now illegal
    } else {
        *r2b = 0;
        // using r2a is now illegal
    }
}}

In addition to what @Ixrec wrote, may I point you at this blog post of mine?

The short summary is that UB is a contract between the compiler and the user; it is a way for the compiler to make assumptions during optimizations that it would have a hard time determining with its own analyses, so it asks the user to prove them instead.

The term “undefined behavior”, and indeed its treatment in C, seems to sometimes lead to the impression that undefined behavior is about not defining something. That’s not really true. It is important to have a precise definition of which programs have undefined behavior, because compiler authors need to know exactly what they can rely on and programmers need to know exactly what they have to prove! What is “undefined” about UB is what the program does when it exhibits UB. This is very different from defining whether a program exhibits UB.


Good point. This is what I think is usually called “external non-determinism” in the literature.

I agree that this “internal” non-determinism is different, but I disagree that it only happens in theory. It happens in practice, and is very observable, for example with concurrency: The following program may print either 0 or 1, and the compiler is free to optimize it to a program that always prints one of the two things:

use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let x = AtomicUsize::new(0);
    rayon::join(
        || x.store(1, Ordering::Relaxed),
        || println!("{}", x.load(Ordering::Relaxed)),
    );
}

This program inherently has several ways in which it can execute (independent of what the environment does). The compiler is at liberty to preserve this non-determinism or reduce it, ruling out some possible executions. Many optimizations the compiler performs around memory accesses have the effect of ruling out some executions.
If any of these possible executions has UB, the compiler can hence rule out all executions except the one exhibiting UB, and then proceed under the assumption that the program has UB.
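
To make that concrete, here is a contrived sketch of my own (not from the discussion): a variant where one of the possible executions has UB, so the compiler may resolve the non-determinism towards that execution and treat the whole program as having UB.

use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let x = AtomicUsize::new(0);
    let data = [42u8];
    rayon::join(
        || x.store(1, Ordering::Relaxed),
        || {
            let i = x.load(Ordering::Relaxed);
            // In the execution where the load observes 1, this read is out of
            // bounds and hence UB.
            unsafe { println!("{}", *data.as_ptr().add(i)) }
        },
    );
}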


I am surprised that this is UB. x_inner no longer has anything to do with x; it is a copy (reborrow) of the inner reference inside the Option. Does this happen because taking a borrow out of x means that x’s lifetime has to outlive x_inner?

Correct. This is needed, I think, to get the desired behavior from methods like split_at_mut that “reborrow” through a raw pointer: after let y = slice.split_at_mut(2).0, when slice is used again, y should become invalid, just like it would if this were a normal reborrow.
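
Here is a compiling sketch of that scenario (mine, not from the thread); the final read is the access that should be UB once slice has been used again:

fn demo(slice: &mut [u32]) -> u32 {
    // (assumes slice.len() >= 2)
    let first_half: &mut [u32] = slice.split_at_mut(2).0;
    let y: *mut u32 = &mut first_half[0]; // escape to a raw pointer derived from the first half
    slice[0] = 7;      // `slice` is used again here...
    unsafe { *y }      // ...so this read through `y` should be UB under the model
}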

Come again? How are those defined, and how do they behave when casting from and to integers?

:rofl:

Hm, so with Stacked Borrows something like this might work if initiating a reference happens on its first use, not on its creation. (No idea where there is a “cactus” here.^^) But that seems way too permissive to me.

What is the argument for making that code legal? I cannot see the connection to the example in the issue you linked. And I think allowing what you asked for in that issue is in direct contradiction with optimizations we might want to allow, like inserting spurious writes to mutable references when we know that does not change the value (which could introduce a data race if this were actually a shared reference).
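
To illustrate the kind of “spurious write” optimization I take this to mean (my own sketch, not from the thread): turning a conditional write through a &mut into an unconditional one that writes back the old value in the untaken case.

// Original: writes only when `cond` holds.
fn set_flag(cond: bool, x: &mut u32) {
    if cond {
        *x = 1;
    }
}

// Hypothetical branchless version a compiler might want to produce.
// This is only sound if `x` is truly unique: if `x` actually aliased memory
// concurrently read through a shared reference, the unconditional write
// would introduce a data race that the original program did not have.
fn set_flag_transformed(cond: bool, x: &mut u32) {
    let old = *x;
    *x = if cond { 1 } else { old };
}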

“mutability polymorphism” should be solved on the type-system level, not by weakening what the reference types mean.


Non-determinism

The reason I said that the nondeterminism occurs “only in theory” is that it does not have to precisely match reality “at the lowest level”. For example, if you use your Special Environment Powers, e.g. a debugger, to find the “canonical” location the compiler used to store a variable, you still can’t reliably change the variable by writing to it (either via the debugger or via a “lackey” in the program).

Similarly, with timing, with a sufficiently smart compiler behind rayon::join, the real-world timing of operations (which might be measured by e.g. timing page faults) might not always match the PL “happens-before” relationship.

From a systems-programming POV, a program is always embedded in an environment, which satisfies its own interpretation of the “environment specification” (at least most of the time). The mapping between environment concepts and PL concepts is not always trivial.

ACA

Exactly because of this.

lifetimes of raw pointers

That’s the problem with models that rely on lifetimes. The original idea was that the compiler couldn’t assume anything about the lifetimes in unsafe code, but could pick whatever lifetimes it wanted in safe code; then as_mut_ptr being safe ruined that idea.
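
For reference, a minimal example (mine) of why that is: obtaining the raw pointer is itself a safe operation, and only dereferencing it later needs unsafe, so safe code alone already forces the model to say something about raw pointers.

fn first_elem_ptr(v: &mut Vec<u32>) -> *mut u32 {
    // No unsafe block needed: creating the raw pointer is safe,
    // only dereferencing it requires unsafe.
    v.as_mut_ptr()
}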

However, if you can define the lifetime of a raw pointer, it doesn’t cause any problem when casting through an integer.

mutability polymorphism

For that, we would have to first solve it on the type-system level, and then get all programmers to use the solution. If we want to define the Rust of today, we have to support “implicit” mutability polymorphism.

The problem is that this would make implementations of split_at_mut that “implicitly” insert reborrows UB, e.g.

fn split_at_mut(&mut self, at: usize) -> (&mut Self, &mut Self) {
    let x: *mut Self = &mut self[..at];
    let y: *mut Self = &mut self[at..]; // this would pop `self[..at]`
    unsafe { (&mut *x, &mut *y) }
}

I agree to the high-level point that the “source” of non-determinism might look different in the mathematical model and in the “real thing”. However, the non-determinism itself – the fact that it occurs – is real, or else we wouldn’t bother modelling it. :wink:

It’s news to me that “happens-before” would contradict real-world timing. How can that happen?

I am afraid this is in direct contradiction with some of the assumptions we want to make, e.g. about &mut really being unique pointers. I am not convinced we have to support “implicit” mutability polymorphism.

Yes, that is intentional. Using a &mut, including reborrowing from it, asserts that this &mut is currently live and hence unique.

It might be possible to weaken this, but this is the first time I hear that we want to allow such code.

Btw, if you want to follow along with the implementation of this in miri, we have a topic on Zulip for that :slight_smile:

