I really see it differently. If nothing else, I think interfacing to C is going to be very common. I also think that writing unsafe code doesn’t have to be (and shouldn’t be) a “wizards-only” activity.
But more than that, I think we just want to be careful in our assumptions of what we know. The region inferencer surprises me sometimes and I wrote the dang thing. I also worry that if our rules are too tightly tied to the type system we will wind up unable to make improvements and changes.
So, since writing that post (actually sort of in parallel), I have been toying around with thoughts about traces. Actually I was pursuing a model very much like the one you described in an earlier comment:
I agree that in a certain sense this is a very natural foundation. In trying to pursue it further, I definitely found some edge cases that I will try to write up soon. But I do think it imposes some limits on our optimization ability. For example, you wrote that my patsy example can be optimized, but I don’t think so:
fn patsy(v: &usize) -> usize {
    let l = *v;
    collaborator();
    use(l)
}
The problem here is that we do not see a use of *v after the first line. So while we know that v guarantees us a borrow of the data, that guarantee only extends as long as v is live. And the reference is only live until you stop using it. So who’s to say that let l = *v isn’t the point where we stop using it?
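Concretely, the optimization at stake is roughly this transformation (same pseudocode style as above, with collaborator() and use() standing in for arbitrary code): rather than keeping l live across the call, the compiler would like to delay the load of *v until afterwards.

fn patsy_optimized(v: &usize) -> usize {
    collaborator();
    let l = *v; // load delayed past the call: only sound if *v must still be readable here
    use(l)
}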
Put another way, patsy could be seen as outlining part of my alloc_free snippet:
fn alloc_free() {
    unsafe {
        let p: *mut i32 = allocate_an_integer();

        // fn patsy(v: &usize) -> usize {
        let q: &i32 = &*p;
        let r = *q;   // let l = *v
        free(p);      // <-- this is the body of collaborator()
        use(r);       // use(l)
        // }
    }
}
(Of course you’d have to thread the data around and so forth, but I don’t think that causes an issue.)
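Putting the two snippets together: if the compiler were entitled to delay patsy's load past the call, then inlining the outlined body back into alloc_free would leave us with a use after free (again, same pseudocode style):

let p: *mut i32 = allocate_an_integer();
let q: &i32 = &*p;
free(p);      // collaborator()'s body runs first...
let r = *q;   // ...and the delayed load now reads freed memory
use(r);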
This may well be, but keep in mind that the goal with the Tootsie Pop Model is not to make just any old function call significant – but rather those that cross over abstraction boundaries. I would like to think that there is a certain intuition about those particular boundaries. (For example, that there is a difference between the implementation of Vec and its users.)
Yeah, that makes sense. On the other hand, my fear is that all of the things which are “common thinking” cannot be mutually reconciled.
I have been trying to shift to the terminology of “unsafe code guidelines” for just this reason. The notion of a “memory model” is one small (but important) part of it.
Absolutely!
Sorry, I think you’re asking me this question, but I feel like I don’t know which unsafe code you meant. =) To be honest, I’m a bit confused by your comment overall.
For example:
It seems obvious that under any model there will exist the potential for unsafe code to create crashes that manifest in safe code. I didn’t mean to imply otherwise. Perhaps my talk of “blame” was confusing? I did not mean to suggest by blame that this is where a crash would occur – merely that I’m interested in seeing if there are rules that can help us to decide which bit of code is wrong (as opposed to UB, which simply states that “something bad happened”). I’m not sure yet if it is helpful to think about blame.
Yes, definitely related. 
I’ve been going back and forth on this. On the one hand, I like the idea of having a kind of “base layer” that is UB defined over traces and ignores fn boundaries and many other artifacts of the source code.
Then you can imagine that unsafe code must layer on top of this some way of proving that this UB will never happen. To claim to be a “safe interface”, this proof must have some modularity to it, to allow for arbitrary safe code to execute (e.g., to invoke the methods in any order, or to take arbitrary (safe) actions in a callback).
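To make “modularity” a bit more concrete, here is a deliberately tiny, made-up example (not anything from the guidelines repo) of an unsafely implemented but safe-to-call interface; whatever proof we give for the get_unchecked calls has to hold no matter what order safe code invokes push and get in:

pub struct TinyVec {
    buf: [u32; 4],
    len: usize, // invariant: len <= 4
}

impl TinyVec {
    pub fn new() -> TinyVec {
        TinyVec { buf: [0; 4], len: 0 }
    }

    pub fn push(&mut self, x: u32) -> bool {
        if self.len == 4 {
            return false;
        }
        // relies on the invariant: len < 4 here, so the index is in bounds
        unsafe { *self.buf.get_unchecked_mut(self.len) = x; }
        self.len += 1;
        true
    }

    pub fn get(&self, i: usize) -> Option<u32> {
        if i < self.len {
            // relies on the invariant: i < len <= 4
            Some(unsafe { *self.buf.get_unchecked(i) })
        } else {
            None
        }
    }
}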
But of course some of those arbitrary safe actions that safe code can do might involve other unsafely implemented interfaces. So for there to be any kind of composition, there has to be some notion of permissions – that is, we have to think “safe code can do anything within these boundaries”. This is exactly the logical system that @RalfJung has been trying to model, of course.
But these permissions are, I think, something that we as the language have to be involved with. After all, as I wrote above, they are a kind of “common currency” that unsafe code uses to interoperate. To a large extent they probably follow from the safe type system.
But I am coming around to the POV that it makes sense to have some “base” notion of UB over traces, even if it doesn’t seem to be the end of the story. But of course even that base notion requires thinking about the optimizations the compiler might do and incorporating them. (It’s not like it’s purely derived from the behavior of the O/S and hardware, etc.)
This is a helpful list, thanks. =) Many of these are already issues in the rust-memory-model repo, but I will try to come back and make sure that all are present (as things to evaluate, iow).
Yeah, I think this is (roughly) the point of the unchecked-get example, and it makes sense. Of course, the point of this post is that many alternative models will wind up being more pessimistic about pointer aliasing EVERYWHERE, rather than just (by default) in the vicinity of unsafe code. =)
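The pattern in question is basically using get_unchecked in a loop to skip the bounds checks, roughly like this:

fn sum(vec: &[i32]) -> i32 {
    let mut result = 0;
    for i in 0..vec.len() {
        // safe alternative: result += vec[i], at the cost of a bounds check per iteration
        unsafe {
            result += *vec.get_unchecked(i);
        }
    }
    result
}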
I’ll repeat my belief that there should be a way to write unsafe code that is simple to get right, even if it is not as well optimized. I am not sure that the Tootsie Pop Model strikes the right balance here – I can clearly see that it has few fans – but I think that its heart is in the right place. All I mean by “simple way to write unsafe code”, mind you, is that there is some set of rules. For example, I would probably be satisfied with a system where one could say, “if in doubt, use a *mut instead of a &”. Then, starting from this base substrate, one can add in optimization by using more Rust types (presuming we have some clear rules for when that’s ok).
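As a minimal, self-contained sketch of what that base substrate could look like (using Box here to stand in for the allocate_an_integer/free helpers from the earlier snippet), the idea is that the unsafe block only ever works with the raw pointer and never creates a &i32, so the compiler is given no reference-based aliasing guarantees to exploit:

fn raw_pointer_version() -> i32 {
    unsafe {
        // stand-in for allocate_an_integer() from the earlier snippet
        let p: *mut i32 = Box::into_raw(Box::new(22));
        let r = *p;              // read through the raw pointer; no &i32 is ever created
        drop(Box::from_raw(p));  // stand-in for free(p): Box's destructor frees the allocation
        r                        // fine: the value was copied out before the free
    }
}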