"Tootsie Pop" model for unsafe code

So I wrote a blog post:

http://smallcultfollowing.com/babysteps/blog/2016/05/27/the-tootsie-pop-model-for-unsafe-code/

It outlines a high-level approach to defining a memory model. I’ve shopped this around some but I wanted to get something public written up. I’d love to get feedback.

5 Likes

The refcell-ref example was pretty interesting and fairly convincing that the boundary should be at the module level. I wonder how this will affect how people write code.

Previous optimisation removals with small measured impact have resulted in unexpected regressions, and it’d only take a few users hitting module-wide performance tanking before it becomes a meme that “all unsafe code should be in its own module to prevent runtime performance impact”, e.g.

fn foo() { [...]; let x = unsafemod::bar(); [...] }

mod unsafemod { pub fn bar() { unsafe { [...] } } }

This is probably a reasonable pattern if applied properly, but I wonder about the risk of people copying it without understanding why the compiler optimisations are being inhibited.

Out of curiosity, are there any other ‘action-at-a-distance’ (ish) examples like this in rust where code can affect the performance of completely unrelated code just by existing?

1 Like

I don’t know whether to love or hate this.

The compiler can be extremely aggressive on most code, without fear of breaking unsafe code, and without unsafe code authors having to hold a complex mental model of aliasing rules, because their code is exempt? Sounds like the best of both worlds.

But the idea that I can’t add a single unsafe block to my whole file without deoptimizing the entire file? Ouch. Sounds like it penalizes code that’s less idiomatic about separating safe and unsafe. Reminds me of JavaScript VM performance cliffs (not that the effect here would be anywhere near as extreme as being kicked down to the interpreter, but still). Unsafe code already comes with a significantly increased mental burden over safe; is undefined behavior (which is already a familiar issue from C) really that much extra?

Here’s a question: may the compiler assume that unsafe code accessed from some safe module has been written to be safe with respect to any safe module? For example, if a pub fn in an unsafe module has multiple &mut arguments (possibly with other conditions like actually being called from a safe module), may they be treated as non-aliasing?
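
To make the question concrete, here is a hypothetical sketch (the module and function names are made up):

mod unsafe_module {
    // Imagine this module also contains `unsafe` code elsewhere, so under the
    // Tootsie Pop model it sits inside an unsafe boundary.
    pub fn combine(a: &mut i32, b: &mut i32) -> i32 {
        *a = 1;
        *b = 2;
        // If the compiler may assume `a` and `b` never alias (because any safe
        // caller has to guarantee that), this can be folded to the constant 3.
        // If not, it must re-read `*a`, since the write to `*b` might have
        // changed it.
        *a + *b
    }
}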

2 Likes

That's easy: love it. :wink:

Yes, this is why I am attracted by the idea of finding a more explicit way to demarcate the "unsafe boundary". If we do it right, this might also help in teaching and just be a cleaner model overall.

My opinion is roughly "very much so". First off, I think most people get the rules around undefined behavior in C quite wrong. I certainly don't claim to know them. Secondly, I hope we can be much more aggressive about optimizing than C normally is =)

Keep in mind that what I'm basically arguing for is that you would get the same level of optimization in unsafe code as C gets, unless you ask for more via annotations. I believe this is pretty close to what we do now, which is fairly conservative.

Yes, this is a good question. I have not settled on the answer in my mind yet. I think the way I would phrase the question is like this: if you are inside an unsafe boundary and you call a pub, safe function within the same module, are you required to ensure that the normal permissions hold? Or can you cheat? If you can cheat, the compiler should not optimize.

I could kind of go either way on this. My initial answer was that the compiler would not optimize on said assumption unless you told it to via some annotation, but I've sort of started to lean the other way. This would mean that the model is that the "required permissions" to call a fn are independent of the calling context, essentially.

The counter-argument is that people may have some pub helper fn that, when called from the outside, is still valid, but which they call internally with looser criteria, secure in the knowledge that it will still work.
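
A hypothetical sketch of that pattern (whether it is permitted is exactly the open question):

mod m {
    // Public, safe helper: outside callers must pass non-aliasing `&mut`s.
    pub fn sum(a: &mut u32, b: &mut u32) -> u32 {
        *a + *b
    }

    pub fn internal_caller(x: &mut u32) -> u32 {
        let p: *mut u32 = x;
        unsafe {
            // The module "cheats": it calls `sum` with two aliasing `&mut`s,
            // secure in the knowledge that the body only reads. If the
            // required permissions are independent of the calling context,
            // this cheat is not allowed, and the compiler may optimize `sum`
            // on the assumption that its arguments never alias.
            sum(&mut *p, &mut *p)
        }
    }
}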

I expect that the proper pattern would not be "all unsafe code should be in its own module" but "each unsafe abstraction should be". Whether people will be able to follow this I don't know. This is why I would sort of like some kind of way to declare the barrier more explicitly, so that when people carbon-copy some existing unsafe code, maybe it's clearer what the declaration is all about.

I would rather prohibit your example but allow:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        unsafe {
            let copy: &mut [T] = mem::transmute_copy(&self);
            let left = &mut self[0..mid];
            let right = &mut copy[mid..];
            (left, right)
        }
    }
}

Purely from intuition alone, I cannot leave unsafe until I’ve restored the invariant that &muts don’t alias. Now I have written code like your example but always felt guilty about it. Conversely, I worry that what you proposed “taints” safe code rather unfortunately. I love the idea that I can opt in to various optimizations on unsafe blocks. I’m less excited about doing something similar just to get the normal optimizations on safe code back.
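
For contrast, a raw-pointer version (a rough sketch, not the actual std source) keeps the overlapping views entirely inside the unsafe block, so no aliasing &muts exist by the time the block ends:

use std::slice;

pub fn split_at_mut<T>(v: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    let len = v.len();
    assert!(mid <= len);
    let ptr = v.as_mut_ptr();
    unsafe {
        // The two slices cover disjoint ranges, so no aliasing `&mut`
        // references escape the block.
        (
            slice::from_raw_parts_mut(ptr, mid),
            slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}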

3 Likes

I know this is missing the point, but for the refcell example, could we use PhantomData to mess with the variance so that the BorrowRef drop occurs precisely at the end of the region?

Oh! I would love it if there were some way to get the optimizer to “do its worst” (enough optimizations to utilize everything it is allowed to assume) without regard to whether performance is actually improved. This would be an amazing tool for CI to ensure unsafe is being used correctly.

I obviously haven't thought about this as much as you have, but to me it sounds like as soon as you exit an unsafe boundary, all invariants should hold. This would make code like split-at-mut-transmute-copy invalid, because when it leaves the unsafe block there are two references to the array. Using an unsafe block to return a copied value smells really bad to me, and I think it should be avoided in general. I think it makes more sense to surround all the dangerous code in an unsafe block.

Also, Rust is supposed to be a high-performance language, and unsafe blocks are supposed to be for people who know what they are doing. I wouldn't want to stop optimizing whole modules just to add a slight amount of hand-holding to unsafe blocks. As for copy_drop, I think it is the same issue: if you are making a reference that Rust considers valid become invalid, I think it makes sense to wrap that in an unsafe block.

It might be nicer if we could have a way to declare that boundary affirmatively.

unsafe { } sounds like a good way to do this to me.

3 Likes

Isn't this basically UBSan? (A Rustic UBSan would definitely be a good thing if there isn't one.)

Ok, and finally my overall takeaway. It took a long while for “Lock data, not code” to become understood, and I worry that we could be making a similar mistake here. I think most of (un)safety can be thought about by looking at the inhabitants of types, or, since Rust is a sub-structural language, by taking the contexts into account too (e.g. aliasing).

First, consider that transmutes and unsafe casts boil down to “extra introduction rules”. These introductions are “extra” in that safe code operating on the introduced types is not expected to “work right” when fed the unsafely introduced values. This is probably a pretty common way of thinking about these sorts of unsafe code, I’d assume.

More interesting is what this means “encapsulation and sealing” are all about: using scoping / privacy to restrict the inhabitants to some subset of what they would “normally” be. Modules, privacy, “unsafe boundaries”, etc. are just the means to this end, and not the end itself. Not only do proposals like “unsafe fields” correctly put the emphasis back on the type with fewer inhabitants, but they also conveniently make functions like copy_drop unsafe even in the module where those fields are in scope. Supporting all 4 combinations of [pub] [unsafe] <field-or-item> nicely separates pub for stability from unsafe for safety; use both for unsafe things likely to be removed or changed.
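
As a small and purely hypothetical illustration of why the field is the natural locus: in today's Rust the invariant below can only be protected by privacy, so safe-looking code in the same module can break it without ever writing unsafe.

mod raw_buf {
    pub struct RawBuf {
        ptr: *mut u8,
        // Invariant relied on by unsafe code elsewhere in this module: the
        // first `len` bytes behind `ptr` are initialized.
        len: usize,
    }

    impl RawBuf {
        pub fn clear(&mut self) {
            self.len = 0; // fine: shrinking preserves the invariant
        }

        pub fn pretend_longer(&mut self) {
            self.len += 1; // breaks the invariant, yet no `unsafe` is required
        }
    }
}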

2 Likes

We could potentially make the “any mod with unsafe anywhere inside of it is an unsafe abstraction boundary” heuristic a bit more refined by taking into consideration that abstraction is accomplished through privacy, and a mod can only be an abstraction boundary if privacy is actually being employed.

So the rule would at least be “a mod which defines a struct with private fields and contains unsafe anywhere inside of it is an unsafe abstraction boundary”. Perhaps go further and also check whether the type and/or its private fields and the unsafe code actually “come into contact” with each other anywhere, although that seems like it would have a lot of potential for subtlety. And then for unsafe code in a fn like split_at_mut which doesn’t involve any abstract types defined by that module, the abstraction boundary could just be the function itself.
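
Roughly, the refined rule might classify things like this (hypothetical modules):

mod cell_like {
    // Defines a type with private fields *and* contains unsafe code, so the
    // whole module would be an unsafe abstraction boundary.
    pub struct MyCell<T> {
        value: std::cell::UnsafeCell<T>,
    }

    impl<T: Copy> MyCell<T> {
        pub fn new(v: T) -> Self {
            MyCell { value: std::cell::UnsafeCell::new(v) }
        }
        pub fn get(&self) -> T {
            unsafe { *self.value.get() }
        }
    }
}

mod slice_utils {
    // No abstract type with private fields is involved, so the boundary
    // would shrink to the function itself.
    pub fn first_two(v: &mut [u8]) -> (&mut u8, &mut u8) {
        assert!(v.len() >= 2);
        let p = v.as_mut_ptr();
        unsafe { (&mut *p, &mut *p.add(1)) }
    }
}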

(Am I right to only consider private fields as the means of enforcing abstraction boundaries, and not private items as well?)

I don’t really want to make modules relevant for optimizations - that seems too much like a layering violation, especially because the most interesting kinds of UB are pretty local.

The Ref/RefMut case seems like clear-cut UB, and should be handled as such.

3 Likes

Woah, I had never heard of that. Cool! I think this would be slightly broader in that code with defined behavior is also being modified. For example, even if we don’t want to ever do this on a real build, it should be safe to reorder function calls if the data reachable from the called functions + arguments doesn’t alias.

If the authors of `std` can't write code that's free of UB, then nobody can.
The authors of `std` can't write code that's free of UB.
------------------------------------------------------------------------------
Nobody can write code that's free of UB.

That's not a world I want to live in.

Unfortunately, this model really rubs me the wrong way. In particular, the fact that refactoring across modules is broken and the fact that a single unsafe block suddenly deoptimizes a whole module bother me. Moreover, I don’t see this as a necessary evil - in both of the motivating cases that you brought up, I feel that the code is correct for completely different reasons than what you describe.


Let’s consider the refcell-ref case first, since this is what led you to broaden the unsafe boundary to whole modules.

In this case, you consider the optimization of reordering

let t = *self.value; // copy contents of `self.value` into `t`
mem::drop(self.borrow); // release the lock
t // return what we read before

to

mem::drop(self.borrow); // release the lock
let t = *self.value; // copy contents of `self.value` into `t`
t // return what we read before

and conclude that “reordering this code would be an invalid optimization. Logically, as soon as self.borrow is dropped, *self.value becomes inaccessible.” However, I think that this is the wrong conclusion.

In the single-threaded case, I would expect and want the compiler to do the reordering - it might be breaking the abstraction, but it would still be correct, and it would likely be faster. A lot of the power of a compiler in general comes from breaking abstractions - this is why inlining is so powerful. We shouldn’t bound the optimizer by what we intuitively think about the accessibility of values - we should instead bound the optimizer by when values actually are accessible.

In the multi-threaded case, you are completely correct that the optimization is invalid. However, we don’t need to put any new constraints on the optimizer to prevent the reordering - drop(self.borrow) will contain a memory fence that prevents reordering the load with it. Thinking about the implementation of a lock, when the lock is freed, some memory has to be written or some message sent that announces to other threads that the lock is free, and that write/message send will have a Release memory ordering that prevents reads from being moved across. Generalizing this argument, it isn’t hard to see that in any case where a piece of memory becomes inaccessible due to interference from another thread, there must be a memory barrier. If there wasn’t, then other threads would have no idea whether the main thread has released access to the memory, since communication could be arbitrarily reordered and delayed.
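
Concretely, a minimal sketch of that argument, using a made-up spinlock-style guard rather than RefCell's actual Ref type:

use std::sync::atomic::{AtomicBool, Ordering};

pub struct SpinGuard<'a> {
    locked: &'a AtomicBool,
}

impl<'a> Drop for SpinGuard<'a> {
    fn drop(&mut self) {
        // The Release store is the "message" that tells other threads the
        // lock is free; loads and stores that come before it may not be
        // reordered after it, so a read of the protected data cannot sink
        // past the unlock.
        self.locked.store(false, Ordering::Release);
    }
}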

Therefore, there is no reason to include copy_drop in the unsafe boundary or hamstring optimizations in it.


Now, let’s consider the split-at-mut-via-duplication case. Although I originally thought otherwise, I agree that this code should be safe. As a stronger example, in the current implementation of `BTreeMap`'s iterators, all the mutable iterators store two mutable pointers into the tree, gradually moving one towards the other. In this case, it is impossible to do what @Ericson2314 suggested, wrapping the use of the two mutable pointers in an unsafe block, because the use of the pointers is in the `next` function, which is necessarily out of the standard library’s control.

To some extent this is quibbling about definitions - what does it mean for two pointers to alias? However, it has huge effects, and I think the definition that you gave (pointing at overlapping regions of memory) is highly flawed. I would define aliasing as follows:

Two pointers alias if there is a write through one of them to the same location that there is a read or write through the other pointer.

If I remember correctly, this is also the definition that LLVM uses. Note that this is a property not only of the data, but of how the data is used. Fundamentally, I think that the reason duplicating &mut pointers is unsafe is not that the two resulting pointers point to the same data, but that, when done in a public function, you can’t know what the user will do with those two pointers; they might write to both, causing aliasing.

Beyond being more lenient, this definition matches what optimizations the compiler actually wants to make. The compiler uses noalias information to reorder loads and stores, which only requires that other loads and stores don’t get in the way, not that there don’t exist any other pointers to the same location.
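
A small example of the distinction (hypothetical function): the two pointers refer to the same allocation, but under the usage-based definition proposed above they never alias, because no location is both written through one and accessed through the other.

fn disjoint_use(data: &mut [u8; 4]) {
    let p = data.as_mut_ptr();
    let q = data.as_mut_ptr(); // same allocation as `p`
    unsafe {
        *p = 1;        // only ever touches data[0], through `p`
        *q.add(1) = 2; // only ever touches data[1], through `q`
    }
}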

This rule for aliasing also reminds me of an annoying rule in C, and how it might be fixed. In C, you can’t create a pointer more than one element beyond the end of an array. This is really painful, and a much nicer rule is to just say that you can’t dereference such a pointer.

From this perspective, self and copy don’t alias, as they are immediately adjusted to point to disjoint memory locations, and they are never used to access shared locations. Thus, the function is safe because at the end of the unsafe boundary (the end of the actual unsafe block, not some far-reaching scope) there are no aliasing &mut references.

6 Likes

Another thing to consider when designing a definition for unsafe is all the places in the BTreeMap implementation with a debug_assert tagged by a comment saying // Necessary for correctness, but in a private module. That is, if that function were to be exposed to the world, the only way for the claim to safety to hold would be for that debug_assert to be changed to an assert.

So the whole concept of “undefined behavior” I sort of view as putting the cart before the horse: first there is some code which is “insufficiently parametric”, meaning that however we choose to compile it, it doesn’t “uniformly” modify the machine state over the set of valid machine states before the compiled code is entered. Then, because the code acts so un-uniformly no matter how hard we try, we declare that it has “undefined behavior” and allow ourselves to compile it any way we like.

The various annotations @nikomatsakis hints at in the blog post to me represent optional assumptions on the set of machine configurations we expect the code to “act uniformly” over. For example, if we opt in to “type-based alias analysis”, we don’t penalize the unsafe block (by declaring its behavior undefined) for how it behaves should some pointers of different types alias after all; those machine states simply fall outside the assumed set.

So yes, while it could be that my proposed tool would only catch undefined behavior by definition, it’s also possible to modify the code in ways that are allowed to change its behavior (given how conservative we want to be), to see whether the code is more resilient than what we ask of it.

(A Rustic UBSan would definitely be a good thing if there isn't one.)

There isn't one, and there should be.

To check for aliasing, I suppose you could just have a giant global interval tree of 'memory ranges currently referenced by & or &mut pointers' and check for conflicting overlaps. Probably slow as hell (might be possible to optimize somewhat), but it would provide a much stronger guarantee than just relying on the optimizer to "do its worst", which is limited by the granularity of the annotations Rust currently supplies to LLVM, LLVM's ability to prove things, etc.
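
A very rough sketch of the bookkeeping such a checker might do (names and structure are made up; a real tool would also need to hook borrow creation and expiry, which is the hard part):

use std::collections::BTreeMap;
use std::ops::Range;

/// Tracks live borrows as address ranges and reports conflicting overlaps.
#[derive(Default)]
struct BorrowTracker {
    // start address -> (end address, is_mutable)
    live: BTreeMap<usize, (usize, bool)>,
}

impl BorrowTracker {
    /// Returns Err if the new range overlaps an existing one and at least
    /// one of the two borrows is mutable.
    fn register(&mut self, range: Range<usize>, mutable: bool) -> Result<(), ()> {
        // Linear scan over candidate intervals; "slow as hell", as noted.
        for (_, &(end, other_mutable)) in self.live.range(..range.end) {
            if end > range.start && (mutable || other_mutable) {
                return Err(());
            }
        }
        self.live.insert(range.start, (range.end, mutable));
        Ok(())
    }

    fn release(&mut self, range: Range<usize>) {
        self.live.remove(&range.start);
    }
}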

2 Likes

Hello :smile:

In all seriousness, it's likely that Miri will do some of that work for us.

6 Likes