[blog post] Nested method calls via two-phase borrowing

In any case, their model is a kind of generalization of Rust, in that it can accept a lot of programs that standard Rust cannot (it is intended to be used for assigning types to unsafe code as well as safe code).

I feel like there is a misunderstanding here on either your or my side that I should clarify (so sorry in advance for not contributing to the main story directly here, I'll keep it brief): My type system cannot assign types to unsafe code by type-checking. It is true that there are some programs I accept that the compiler doesn't (yet) accept (i.e. around non-lexical lifetimes), but the vast majority of unsafe code is not accepted by the type system, and that was never the goal. However, the type system is sufficiently small and formal that I can define the mathematical contract associated with each type. Then I can "assign a type to unsafe code" by manually proving that the code satisfies the contract. This is useful because it means that the type soundness theorem extends to code using the unsafe code at its given type. That's however not assigning a type to the code in the classical sense, i.e., these contracts are nothing a type checker could have any hope of checking against.

Coming back to the proposal at hand: First of all, at this point we plan to finish the type system we have and write a paper about it, just so that there is something there. Whatever is the solution to this problem, it'll probably not be in that first type system. But of course we may want to amend that later. :wink: Still, reading the blog post, one way of maybe dealing with this that would fit the existing framework fairly well came to my mind, and now I am wondering how close you think that is to your post, @nikomatsakis. I think this thought already lurked in my head when we previously talked about it, but it is much easier to express in your "two-phase borrowing" than it was in "borrowing for the future" or "two-phase lifetimes". Warning, very fresh thoughts ahead. So let's look at this code again:

/* 0 */ tmp0 = &mut vec;   // mutable borrow created here..
/* 1 */ tmp1 = &vec; // <-- shared borrow overlaps here         |
/* 2 */ tmp2 = Vec::len(tmp1); //                               |
/* 3 */ Vec::push(tmp0, tmp2); // ..but not used until here!

So here we have a shared borrow to vec while a mutable borrow is active. That is, on its own, not even a new phenomenon, it can already happen in today's code via re-borrowing. The following is legal, after all:

tmp0 = &mut vec; 
tmp2 = { tmp1 = &*tmp0; Vec::len(tmp1) };
Vec::push(tmp0, tmp2);

I feel like the restrictions for this pattern are pretty much the same as the restrictions you give for two-phase borrows (at least if we allow full NLL), if we just think of the "inactive" phase of the mutable borrow as being a sublifetime for which tmp0 is shared re-borrowed elsewhere. Clearly that shared re-borrow has to end before the next use of tmp0, and only read-only operations are allowed on it.

Of course, the key difference in the first block of code is that tmp1 is initialized from vec, not tmp0, which makes this a fairly strange form of re-borrowing (and we probably would not want to allow it for mutable re-borrows). And of course if tmp0 is based on a field of a struct, it may not be possible to re-write the code into above form. Still I am wondering whether there is some connection here between "not-yet-active" mutable borrows and "shared re-borrowed" mutable borrows.


I prefer either desugaring (@RalfJung’s inferred reborrows seem promising), or whole new type (a la @arielb’s). The more magic in the borrow checking / lifetime inference that cannot be expressed directly, the more we risk only increasing expressiveness in a way that’s impossible to abstract over.

With re-borrowing, a new mutable borrow would be fine too:

tmp0 = &mut vec;
tmp2 = { tmp1 = &mut *tmp0; Vec::push(tmp1, 0); Vec::len(tmp1) };
Vec::push(tmp0, tmp2);

Meanwhile, this is blatantly unsound with BFTF ("borrowing for the future"):

tmp0 = &mut vec[..];
tmp1 = &mut vec;
Vec::push(tmp1, 0);
<[_]>::sort(tmp0); // oops, `vec` might have resized

Indeed, I suppose I spoke imprecisely. I did not mean to imply that your type system can "type" unsafe code internals. But I realize that even though we talked about precisely this topic at POPL I am finding myself getting confused, so maybe we can clarify all the moving parts once more (and this time in some place I can search for later).

  • Your system can assign types to fn boundaries.
  • These types include richer invariants, so as to cover things like RefCell and Cell.
  • Custom proofs must be supplied to show that the body of some ref-cell method is "safe".
    • These proofs do, I believe, still refer to and manipulate lifetimes, right?
    • For example, in an unsafe method, we might reason about the fact that we borrowed something, which implies we created a lifetime and used it to bundle up some capabilities, while lending out others
  • Plus there is a "type system" that can be used to show that safe code is "safe"
    • i.e., to prove that anything which passes the type system will not go wrong

Does this all seem at least not totally inaccurate? In that case, I think what I was trying to say is that the concept of a "lifetime with holes" is a significant change to the underlying logic that those custom proofs for unsafe code are operating in. But please do educate me as to how I am saying this all wrong.

Indeed. I think what you describe is roughly what I had in mind, yes, except that one has to cope with this distinction that you mention here. There is a related example that I discussed with @aturon, where you can see that sometimes reservations might permit mutation that makes the original path irrecoverable:

let mut foo: Box<Vec<u32>>;
let mut bar: Box<Vec<u32>>;
let mut x: &mut Box<Vec<u32>> = &mut foo;
let y = &mut *x;
x = &mut bar;
y.push(...); // pushes into `foo`

Currently, we do not accept this code, which leads to some contortions around loops (see rust-lang/rust issue #10520, "Permit mutation of `&mut` pointers whose pointees have been borrowed"), but I would like to do so (I seem to recall us talking about this one time, right?). I've not thought very deeply about it just yet.

Yes, the inability to distinguish between borrowing a pointer vs pointee is very annoying. I’m a bit confused how it relates though?

I wanted to show that you can’t always “desugar” a reservation into a shared borrow and then re-evaluating the lvalue path (you might imagine that you can, because evaluating an lvalue path is side-effect free).

(Warning, half a blog-post ahead. I guess I should just make this a full blog post over the week-end, now that I already wrote all this text.^^)

In some sense, I am building two type systems, and I think the confusion here is coming from that. First of all, I am building a "syntactic type system" (usually I'd just call this the type system), which is an attempt to formally capture (an over-approximation of) the safe code that Rust accepts. This looks a lot like most type systems do: There's a grammar of legal types, there are some contexts tracking the types of variables and the lifetimes, there are inference rules telling you e.g. when you can load from a pointer, when you can call a function, and so on. This type system is an over-approximation because I provide more flexibility than rustc does when it comes to lifetimes. However, a design goal of this type system is for every rustc-accepted program not using unsafe blocks (that falls into the fragment I cover) to be (syntactically) well-typed. Notice that the program can call external, unknown functions; this corresponds to type-checking the program in a non-empty context. So, we can derive typing judgments like "If we assume that Cell::new, Cell::get, ... have the type given in libstd, then we can (syntactically) derive that this program is well-typed".

Next, there is a "semantic type system", which I usually call the "(semantic) model of the Rust type system". This model defines, for every type present in the syntactic system, a formal contract that values of the given type have to satisfy. In particular, for function types, the contract says that if all arguments satisfy their contract (as given by the argument types), then the function must execute safely and return something satisfying the contract of the return type. Notice how in defining the contract for function types, we only cared about the contract of the argument and return types -- the "actual types" (as in, their syntactic representation as an instance of the grammar) do not matter at all! This leads us to the idea that really, any contract is a valid "semantic type". This idea is incredibly powerful. We can now write down the contract saying "this is a legal 32-bit integer value representing an even number". Let's call this type i32even. This type does not exist syntactically, but since we have defined its contract, we now also know the contract e.g. for types like fn(i32even) -> i32even or Option<i32even>. Things compose. So what I do to verify unsafe code is that I define semantic types like Cell<T> that do not exist syntactically. I can then compose these types with things like function types to obtain the contract of fn(T) -> Cell<T>, and then I can show that Cell::new actually satisfies this contract. This is the sense in which my "types include richer invariants", and the sense in which I can "type" unsafe code -- I can semantically type it, but not syntactically. However, since syntactic well-typedness implies semantic well-typedness, the two notions compose together.
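The "contract" idea has a loose syntactic analogue in today's Rust: a newtype whose public API maintains an invariant by construction. This is only an illustration of the idea of a semantic invariant (using the even-number invariant and hypothetical names), not the semantic model itself:

```rust
// Hypothetical illustration: a newtype whose public API maintains the
// invariant "the wrapped i32 is even", by analogy with the semantic
// type `i32even` discussed above.
pub struct Even(i32);

impl Even {
    /// Only even values can be wrapped, so the invariant holds by construction.
    pub fn new(n: i32) -> Option<Even> {
        if n % 2 == 0 { Some(Even(n)) } else { None }
    }

    /// Every operation must preserve the invariant (a sum of evens is even).
    pub fn add(self, other: Even) -> Even {
        Even(self.0 + other.0)
    }

    pub fn get(&self) -> i32 {
        self.0
    }
}

fn main() {
    let a = Even::new(4).unwrap();
    let b = Even::new(6).unwrap();
    assert_eq!(a.add(b).get(), 10);
    assert!(Even::new(3).is_none()); // the invariant cannot be violated
}
```

The difference to the semantic model is that here the invariant is enforced by a runtime check, whereas a semantic type is a proof obligation discharged once and for all.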

This is absolutely correct. However, it may be that "two-phase borrows" also constitute a significant change. :wink: In any case, "lifetimes with holes" are certainly much further from what my mental model of a lifetime is.

[quote="nikomatsakis, post:24, topic:4886"]
let mut foo: Box<Vec<u32>>;
let mut bar: Box<Vec<u32>>;
let mut x: &mut Box<Vec<u32>> = &mut foo;
let y = &mut *x;
x = &mut bar;
y.push(...); // pushes into `foo`

Currently, we do not accept this code, which leads to some contortions around loops, but I would like to do so (I seem to recall us talking about this one time, right?). I've not thought very deeply about it just yet.
[/quote]

Ah, yes, we talked about this. I do not understand, however, how this is related to nested method calls or two-phase borrowing. In my eyes, the only problem here is that rustc thinks x is being reborrowed, but actually it would be much smarter if it were moved instead. Indeed, a tiny modification makes rustc accept this program (as has been pointed out to me by Jacques-Henri Jourdan):

let mut foo: Box<Vec<u32>>;
let mut bar: Box<Vec<u32>>;
let mut x: &mut Box<Vec<u32>> = &mut foo;
let y = &mut *{x}; // the curly braces make rustc consider this a move out of x, i.e., x is uninitialized after this line
x = &mut bar;

If the borrow-checker were smarter, there would be no problem with a reborrow. If you MIR-assign (as opposed to HIR-assign, aka DropAndReplace, which is a drop plus an assignment) to a value, all borrows of parts of the value that are not behind a pointer get invalidated (because you overwrite them), but borrows of parts of the value that are behind a pointer get "forgotten": those values can now only be accessed through the borrow result, so we can clear the borrow.

I'd like to propose a third alternative to the two-phase borrow.

Relax the requirement that a mutable reference is the only reference to the underlying object as long as it exists. Instead, only require that there are no other references to the underlying object at the points where the underlying object is mutated.

This is in one sense a generalization of the two phase borrowing. To describe it with the two-phase terminology: Instead of having a reserved borrow that changes into a mutable borrow once the borrow is used, the borrow is reserved all the time, and when it is used to mutate the borrow upgrades to a mutable borrow only for the mutation, and then reverts back to being reserved.

To put it yet another way: a mutating operation invalidates any other references to the mutated object. (Instead of the existence of a mutable borrow invalidating any other references to the object in current Rust)

When you think about it, the reason mutable borrows need to be unique is to prevent a mutation from invalidating any other references. This guarantee can be held just as well when a mutable borrow is guaranteed to be the unique reference to an object when the object is mutated only.

The vec.push(vec.len()); case would be solved naturally without requiring any changes in the desugaring or the order of evaluation:

/* 0 */ tmp0 = &mut vec;   // mutable borrow created here..
/* 1 */ tmp1 = &vec; // <-- shared borrow overlaps, but that's fine as there is no mutation happening now
/* 2 */ tmp2 = Vec::len(tmp1); // shared borrow of tmp1 ends here
/* 3 */ Vec::push(tmp0, tmp2); // shared borrow is no longer active, so tmp0 is again unique, mutation is OK


let mut v: Vec<String> = vec![format!("Hello, ")];
v[0].push_str({ v.push(format!("foo")); "World!" });

example is still rejected:

/* 0 */ tmp0 = &mut v;
/* 1 */ tmp1 = IndexMut::index_mut(tmp0, 0);
/* 2 */ tmp2 = &mut v;
/* 3 */ Vec::push(tmp2, format!("foo"));  // <-- mutation not allowed while tmp1 is active! Error 
/* 4 */ tmp3 = "World!";
/* 5 */ String::push_str(tmp1, tmp3);

Other cases that should intuitively work (but fail in two-phase borrowing) also work:

let v_ref = &mut v;
v_ref.push(v.len());
v_ref.push(5);

desugars to

let tmp0 = &mut v;
let tmp1 = &v;             // Another reference is fine as long as no mutation is happening
let tmp2 = Vec::len(tmp1); // tmp1 lives until here
Vec::push(tmp0, tmp2);     // tmp1 is no longer alive, mutation is OK
Vec::push(tmp0, 5);

I think this proposal is not only more powerful than two-phase borrowing, it is also simpler to explain. Instead of 'mutable references have to be unique' the rule becomes 'references at mutation sites must be unique', which is almost as easy to explain imo, and simpler than having multiple phases in a reference lifetime.

To work, this would require NLL of course. The definition for 'mutation' in this proposal would be

  1. assignments through the reference
  2. passing the reference somewhere that expects an &mut reference for which the compiler can't statically see what happens to it

The latter includes

  2a. passing the mutable reference as a function argument
  2b. returning it from a function
  2c. putting it in a datastructure or closure that is passed to another function or outlives the local function's scope

So at such function call boundaries an &mut reference would still have to be unique, but within a local scope it would allow a number of use cases that are safe but that are now not allowed by the borrow checker. Things such as shuffling mutable references through local variables and doing the equivalent of a re-borrow that does not have its own dedicated block will be possible.
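To make rules 1 and 2a concrete, here is a small sketch in today's syntax (`opaque` is a hypothetical function; the comments describe the proposed rules, not current rustc behavior):

```rust
// Hypothetical opaque function: the compiler cannot see what it does
// with the reference, so passing `m` to it counts as a mutation site (2a).
fn opaque(r: &mut Vec<u32>) {
    r.push(0);
}

fn main() {
    let mut v: Vec<u32> = vec![1, 2, 3];
    let m = &mut v;
    // Under the proposal, `m` is merely reserved up to this point, so a
    // shared read like `v.len()` here would be allowed (uniqueness is
    // only required at mutation sites). Today's rustc would reject it:
    // let len = v.len();
    m.push(0);   // mutation through `m`: `m` must be unique *here* (rule 1)
    opaque(m);   // passed as &mut to an opaque call: also a mutation site (rule 2a)
    assert_eq!(v.len(), 5); // `m` is dead, so `v` is accessible again
}
```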


To clarify, the compiler would need to know statically which references may alias the mutable reference at a mutation site. So in very complex local control flow the compiler would need to make conservative assumptions, but I think that for common code patterns that would not be an issue.

Relax the requirement that a mutable reference is the only reference to the underlying object as long as it exists. Instead, only require that there are no other references to the underlying object at the points where the underlying object is mutated.

Correct me if I'm wrong, but I think we already sort of have this in the form of "reborrowing" a &mut as a &. Even without NLL, this compiles today:

let tmp0 = &mut v;
let tmp2 = {
    let tmp1 = &*tmp0;
    Vec::len(tmp1)
};
Vec::push(tmp0, tmp2);
Vec::push(tmp0, 5);

And assuming I understand the NLL proposals correctly, they would allow us to omit those superfluous curly braces and get this:

let tmp0 = &mut v;
let tmp1 = &*tmp0;
let tmp2 = Vec::len(tmp1);
Vec::push(tmp0, tmp2);
Vec::push(tmp0, 5);

Come to think of it, what is the advantage of the various multi-phase/nested/time-warp borrowing proposals for desugaring over desugars we already "support" like this simple reborrowing?

Using reborrows is discussed a bit above in post #21 and its replies. I must say I don’t exactly get why that wouldn’t work, but I was assuming that it doesn’t. If reborrowing solves the nested method case as well, why bother with all this two-phase borrowing or changing the desugaring of nested methods at all?

My above proposal (which I’ll hereby call ‘mutation site unique borrowing’ just to have something to call it) is still more powerful than two-phase borrowing, see e.g. the example I quoted of

let v_ref = &mut v;
v_ref.push(v.len());
v_ref.push(5);

Thinking about it, the following example would (afaict) not work with only NLL and reborrowing while this should work in two-phase borrowing and mutation site unique borrowing:

let mut vec = vec![1,2,3];
let mut vec2 = vec![4,5,6];
(if condition { &mut vec } else { &mut vec2 }).push(vec.len());

The compiler cannot unconditionally replace the vec.len() with a tmp0.len() (where tmp0 is the reborrow of the mutable borrow), so if this example should work, mere NLL+reborrowing is not sufficient.

In fact using reborrowing to solve the original vec.push(vec.len()) case also requires changes to the desugaring so it is in a sense a variant of the desugaring changes proposals that are also mentioned in the OP’s blog post.

After discussing with @RalfJung, I found the perfect example of why we cannot just assume that it is safe to have shared access to the &mut borrow simply because we got back a shared borrow. Consider the existing method get_mut defined on Mutex:

fn get_mut(&mut self) -> LockResult<&mut T>

The effect of this method is to bypass the lock and grant mutable access to the contents. One could easily write a wrapper for this method like so:

fn access<T>(m: &mut Mutex<T>) -> &T {
    m.get_mut().unwrap()
}

Note that this function has the signature pattern we're looking for: it takes in an &'a mut T and returns an &'a T. It does not acquire any locks, it simply gives us (shared) access to the contents of the mutex. This function is safe today, because we had unique access to the mutex for 'a and we surrendered that access in order to gain (shared) access to its contents.

However, if we were to adopt this idea that taking in an &mut T and returning an &T implicitly "demotes" that &mut borrow to a shared borrow, that would imply that we could go on using the mutex afterwards:

let mut mutex = Mutex::new(22_usize);
let contents0: &usize = access(&mut mutex);

let mut guard = mutex.lock().unwrap(); // illegal today, legal under this demotion idea
let contents1: &mut usize = &mut *guard;

Now we have a shared borrow (contents0) and a mutable borrow (contents1) pointing at the same memory. Not good!
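Putting these pieces together, here is a compilable sketch of the safe pattern (the unsound continuation stays commented out, since today's borrow checker correctly rejects it):

```rust
use std::sync::Mutex;

// Safe today: we exchange unique access to the Mutex for shared access
// to its contents, for the same lifetime. No lock is acquired.
fn access<T>(m: &mut Mutex<T>) -> &T {
    m.get_mut().unwrap()
}

fn main() {
    let mut mutex = Mutex::new(22_usize);
    let contents0: &usize = access(&mut mutex);
    assert_eq!(*contents0, 22);
    // Under the rejected "demotion" idea, the following would be allowed
    // while `contents0` is still live -- creating a shared and a mutable
    // reference to the same memory:
    // let mut guard = mutex.lock().unwrap();
    // let contents1: &mut usize = &mut *guard;
}
```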

So, clearly -- given that existing libstd APIs rely on it -- we need some form of explicit declaration if we want to support this pattern. I imagine that if we (eventually) adopted some kind of type that lets you take in an &mut which has two associated lifetimes: one, the mutable range, and the other, a longer, shared range, then we could express the signature that way. It'd be something like:

fn mut_then_share<'w, 'r: 'w>(&{'w -> 'r} mut self) -> &'r u32

Here I am writing &{'w -> 'r} mut self to mean "mutable for 'w, then shared for 'r".
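For contrast, the closest signature expressible today keeps the receiver mutably borrowed for the entire shared lifetime. A minimal sketch (with a hypothetical `Counter` type):

```rust
struct Counter {
    value: u32,
}

impl Counter {
    // With today's elision this is fn mut_then_share<'r>(&'r mut self) -> &'r u32:
    // the receiver stays *mutably* borrowed for as long as the shared result
    // is alive, unlike the hypothetical &{'w -> 'r} mut self, which would
    // demote the borrow to shared after 'w.
    fn mut_then_share(&mut self) -> &u32 {
        self.value += 1;
        &self.value
    }
}

fn main() {
    let mut c = Counter { value: 0 };
    let shared = c.mut_then_share();
    // let other = &c; // ERROR today: `c` is still mutably borrowed here
    assert_eq!(*shared, 1);
}
```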

I am not a priori against such an extension, but I would want to pursue it after we finish up with the NLL work. It doesn't seem a priori related to me. Moreover, I think there is no particular reason that we would have to solve the nested method call problem using the same concept (that is, it is ok to have an &mut borrow not start immediately -- or to change the desugaring -- to address the problem of the &mut starting too soon, while having also the ability to, up-front, borrow something for a period 'w that degrades into a shared borrow after that).


So in the lang-team, we discussed this post (along with NLL) the other day ("minutes"). Mostly it was a review of their contents, but there was some discussion back and forth about whether it would be better to have something like two-phase borrows or just to change the desugaring for method calls -- specifically, to evaluate "simple" method receivers after the arguments (for some definition of simple, read on).

A lot of this discussion hinged on what users expect from their evaluation order. It was clear that many people probably have a pretty fuzzy idea about the evaluation order around method calls and so forth. In particular, we came up with this example that kind of gets at the difference:

let mut a = 3;
let mut b = 4;
let mut x = &mut a;
x.foo({ x = &mut b; 1})
// today, expanded to: foo(&mut *x, { x = &mut b; 1}) which expands to:
//     let tmp0 = &mut *x;
//     let tmp1 = { x = &mut b; 1 };
//     foo(tmp0, tmp1)
// with modified desugaring that evaluates the `x` second:
//     let tmp1 = { x = &mut b; 1 };
//     let tmp0 = &mut *x;
//     foo(tmp0, tmp1)

This code is (potentially) legal (modulo rvalue lifetimes) with either the modified desugaring or with two-phase borrows, but its semantics differ (it depends, actually, on whether we allow an assignment to x when *x is borrowed; currently, we do not, but I’d like to change that – I can’t find the issue just now though). The difference is precisely what value foo will see in its first argument: 3 or 4?

Under the two-phase system, presuming we do allow x to be modified when *x is borrowed (safe for borrowed things, in general), then we would see 3. This corresponds to the receiver x being evaluated first. Under a modified desugaring, where the borrow occurs later, we would read the new value of x that was assigned by evaluating the first argument, and hence we would see 4.


So, I’m working on writing up an RFC proposing a variant of two-phase borrows. The specific proposal I have in mind is not intended to be a permanent solution. Rather, it is intended to be a narrowly targeted change that addresses the most burning problem (vec.push(vec.len()) will work) but leaves flexibility for the future. I also aim to minimize the impact on the user’s mental model – that is, I wanted to avoid changes that would dramatically affect the set of code that type-checks. This is in tension with the goal of supporting a simple desugaring strategy, and I chose to sacrifice that goal for now.

The proposal is basically a targeted form of two-phase borrowing, where the phases only take effect when users use the method-call syntax. This would be achieved by adding a deferred mutable borrow form into MIR, where the borrowed value is “reserved” until the reference is first used, at which point the borrow is activated (exactly as I described in my blog post). We would only generate these deferred borrows when desugaring method calls into MIR.

Here are the alternatives that I see, and why I landed on this particular proposal:

  • Change the desugaring to evaluate the receiver last in some cases.
    • Although I think it can be made backwards compatible if we put enough restrictions, changing the evaluation order introduces inconsistencies into the language. For example, you can execute x.foo({x = x1; ...}) today if foo is a fn(self) method, in which case we evaluate the receiver first (and hence foo sees the original value, not x1). But if we changed foo to be an &mut self method, the code would still compile, but instead we’d see the new value. I find this unfortunate.
  • Defer all mutable borrows, as in the original proposal.
    • The original proposal deferred mutable borrows more broadly, but defined the analysis in terms of the MIR. This means that you can desugar vec.push(...) into let tmp0 = &mut vec; ...; Vec::push(tmp0, ...), which is nice. However, it also means that users are likely to encounter these “deferred borrows” when doing experimentation and trying to learn the system, and I wasn’t satisfied with the ‘mental model’ that I thought would result. In particular, I think it would be hard to understand where you can take advantage of a borrow being deferred without also understanding how MIR desugaring is done and so forth.
  • Defer a more broadly defined subset of mutable borrows.
    • The RFC I plan to submit would defer only those borrows that occur as part of method desugaring. We could defer a bigger subset of mutable borrows: for example, all borrows that result from statements like let x = &mut ... or x = &mut .... This is probably what the previous proposal would do in practice: we could actually firm that up if we wanted. The deferral would last until the variable x is next referenced. While I think this is a reasonable and forwards-compatible plan, I felt like I wasn’t sure that this mental model felt “up to snuff” for me. In particular, most of the reasoning about how long a borrow lasts is based on the lifetimes that appear on types, but with this change it is now based also on the path where the reference is stored. This didn’t feel like something I want end-users to see.
  • Extend the type system to understand borrowing for the future, or Ref2.
    • After some consideration, I think that it would be best to (eventually) adopt one of these two proposals. (I am not sure which.) They feel to me like the right way to explain what is going on. However, I would rather do this as part of a more general RFC that also introduces non-lexical lifetimes. I would prefer not to block on this more general RFC, but instead allow us to go forward with solving the immediate pain point of nested method calls immediately.
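The fn(self) evaluation-order point in the first alternative can be demonstrated with code that compiles today (`Wrapper` and `val` are hypothetical names):

```rust
// Demonstrates receiver-first evaluation with a by-value (fn(self)) method.
#[derive(Clone, Copy)]
struct Wrapper(i32);

impl Wrapper {
    fn val(self, _arg: i32) -> i32 {
        self.0
    }
}

fn main() {
    let mut x = Wrapper(3);
    // The receiver `x` is evaluated (copied) before the argument block
    // runs, so `val` sees 3 even though the block reassigns `x`.
    let seen = x.val({ x = Wrapper(4); 0 });
    assert_eq!(seen, 3);
    assert_eq!(x.0, 4);
}
```

Changing `val` to take `&mut self` (with a modified desugaring that evaluates the receiver last) would flip the answer to 4, which is exactly the inconsistency described above.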

In any case, I think that the plan I am proposing is roughly the intersection of all the above proposals anyhow. In other words, we could choose to go further and adopt a type-system-based solution, or even a desugaring-based solution, without breaking any code.



