Non-lexical lifetimes based on liveness


#1

Hello Fellow Rustaceans,

I recently wrote this blog post that describes some ideas for how to handle non-lexical lifetimes:

http://smallcultfollowing.com/babysteps/blog/2016/05/04/non-lexical-lifetimes-based-on-liveness/

I was hoping to discuss these ideas here! :slight_smile:


#2

In this example:

let mut data1 = vec![];
let mut data2 = vec![];
let x = &mut data1[..]; // <--+ live-range: 'a
let y = &mut data2[..]; // <----+ live-range: 'b
use(x);                 //    | |
// <--------------------------+ |
data1.push(1);          //      |
use(y);                 //      |
// <----------------------------+
data2.push(1);
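For reference, here is a runnable rendering of the example (a sketch: `use_it` is a hypothetical stand-in for `use`, which is a keyword, and the vectors are seeded with one element so their types can be inferred):

```rust
// Stand-in for the `use(...)` calls above (`use` is a keyword in Rust).
fn use_it<T>(_v: &mut [T]) {}

// Under liveness-based lifetimes, `x`'s borrow of `data1` ends at its
// last use, so `data1.push(1)` is accepted even while `y` is still live.
fn demo() -> (Vec<i32>, Vec<i32>) {
    let mut data1 = vec![0];
    let mut data2 = vec![0];
    let x = &mut data1[..]; // live-range 'a
    let y = &mut data2[..]; // live-range 'b
    use_it(x);              // 'a ends here
    data1.push(1);
    use_it(y);              // 'b ends here
    data2.push(1);
    (data1, data2)
}
```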

How is the subtype relation of lifetimes defined now (e.g. between 'a and 'b)? And isn’t this slightly against the principle of “explicitness” – not having a visible boundary for lifetimes (even though not everything is visible even today)?

Also, will this likely improve or degrade the compilation time?


#3

Have you given any thought to how the new lifetime rules affect refactoring?

One point of pain I’ve had with Rust is that in a large refactoring, after pleasing typeck, I would sometimes run into problems with borrowck. In a large project, I have had to slightly reorder large swathes of code to satisfy it.

I am trying to illustrate that because of this staging, the feedback about lifetimes comes late, so these problems are more expensive to fix. And concretely: more flexible lifetimes might mean one runs into problems more rarely (good), or that it becomes harder to predict the borrow checker (bad).

My general impression is that the flexibility here outweighs the complexity, so that refactorings will become easier overall, but I thought I’d bring this up as another kind of use case to think about.


#4

I’ll be talking about the subtyping relationship in another post (hopefully in the next day or so). It turns out to be a bit more complex than just subsets, in that I think it has to take into account the point where the subtyping relationship occurs.

Regarding explicitness, the original design was definitely motivated by having a very explicit scheme that could be readily understood. But what I think we’ve found in practice is that people can quickly get an intuitive understanding of the borrow checker rules, but they naturally think about things in terms of the flow of the code. The fact that the region checker then imposes these kinds of artificially coarse boundaries hence winds up being more confusing.

I think that the overall plan to move regions out of typeck and instead run them as a post-pass on the MIR ought to be a big win for compilation time. It should drastically reduce peak memory usage, for one thing, and also make caching far easier. Having more complex regions, though, is probably slower overall than simpler regions, but that’s not entirely clear and will depend a lot on implementation specifics.


#5

I can imagine this. One thing I would like to do in the compiler in general is to make it more of a pipeline – that is, we would take one function, type-check the body, create MIR, do region and borrow checking, and then do that whole sequence for the next function. This is in contrast to today, where we type-check ALL function bodies, then create MIR for ALL function bodies, etc. The advantage of the pipeline would be avoiding precisely the scenario you describe.


#6

I like that lifetimes are tied to scopes. It makes reasoning easy, and I tend to use smaller scopes than I otherwise would. Limiting the scope of variables is a good thing in itself.

Most of the time it’s trivial to insert another level of scope. The downside is that the following is just ugly:

let mut data = vec!['a', 'b', 'c'];
{
    let slice = &mut data[..];
    capitalize(slice);
}
data.push('d');
data.push('e');
data.push('f');

Using if let would be much nicer, but it doesn’t work with exhaustive patterns (which is probably a good thing).

But something like this would be cool:

let mut data = vec!['a', 'b', 'c'];
with let slice = &mut data[..] {
    capitalize(slice);
}
data.push('d');
data.push('e');
data.push('f');
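For comparison, under liveness-based lifetimes neither the naked block nor the hypothetical `with let` would be needed, since the borrow ends at the last use of `slice`. A sketch (with a minimal `capitalize` of my own, since the thread doesn’t define one):

```rust
// Minimal stand-in for `capitalize`.
fn capitalize(s: &mut [char]) {
    for c in s.iter_mut() {
        *c = c.to_ascii_uppercase();
    }
}

// With liveness-based lifetimes, `slice`'s borrow ends at its last use,
// so no extra scope is required before mutating `data` again.
fn demo() -> Vec<char> {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..];
    capitalize(slice);
    data.push('d'); // ok: `slice` is dead here
    data
}
```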

Regarding problem case #3 from the first blog post, which I think is the only one where there’s potentially no workaround:

Could we just special-case returns? I mean, a return leaves the function immediately, and all subsequent borrows are no longer relevant. There are other special cases for return; for example, a match arm with a return doesn’t have to have the same type as the other arms.


#7

I kind of like the way the current model creates a bug squashing flow. For each general category of error (grammar errors, name resolution errors, type errors, lifetime errors) I get into a flow of solving this specific kind of issue, with a clear indicator of how far I am from my next ‘level up.’ Of course this flow ends up being recursive because a refactor for a type error might cause a name resolution error and so forth.

Not sure that having all of these kinds of errors intermixed would improve the experience. I suppose other people might find it frustrating rather than gratifying to have errors almost reach zero and then rocket up again. I’ve just internalized the knowledge that the compiler hasn’t reached all steps of compilation yet.


#8

I used to agree and this is (one of) the reasons that the system is designed as it is. However, I think you are in the minority in this respect: the vast majority of people I’ve spoken to consider this mostly an annoyance. And certainly examples like problem case #3, where the workarounds introduce inefficiency, arise with some regularity.
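For concreteness, the usual workaround for problem case #3 performs a redundant lookup. A sketch (the `String`/`u32` types and the name `get_default` are my own; the blog post's exact code may differ):

```rust
use std::collections::HashMap;

// The double-lookup workaround for problem case #3: check for the key
// first, then look it up again. This compiles under lexical lifetimes
// because no borrow is held across the `insert`, but it hashes the key
// up to three times -- the inefficiency mentioned above.
fn get_default<'m>(map: &'m mut HashMap<String, u32>, key: &str) -> &'m mut u32 {
    if !map.contains_key(key) {
        map.insert(key.to_string(), 0);
    }
    map.get_mut(key).unwrap()
}
```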

Ultimately, I’ve become pretty convinced that something like non-lexical lifetimes will improve the user experience by allowing for a more intuitive understanding of how the type system works, rather than trying to internalize rules based around lexical structure.

Not really. For one thing, as I’ll cover in an upcoming post, one can easily create the same scenarios that arise with problem case #3 within a single function (by inlining a function body). I’ve taken it as a kind of rule of thumb that if the lifetime rules start special-casing function boundaries (or loops or other things, for that matter), it’s a sign that we’re going in the wrong direction, as that indicates the possibility of an artificial limitation or even a soundness bug.


#9

I think what a lot of people want is to identify the “big problems” (those that will force them to restructure their code) before dealing with every nitty-gritty “little problem”. Unfortunately, the way the compiler is set up, the deepest errors (those from the borrow checker) come last, so that’s not very helpful in this respect. But just giving all the errors from one function before another isn’t necessarily the best fix either: what I’d really love is to get a better idea of interprocedural flow, which I think can be helpful here too.


#10

Any thoughts on how this could be exposed to tools? Is this information that the proposed Rust Language Server could provide for example?

I really think that if the rules for what can be referenced become more complex, showing the programmer where a variable is available while editing could be very beneficial. But if a Racer-like tool has to basically recreate the entire compiler, I think such tools are much less likely to be created (and stay up to date).


#11

I would imagine that, yes, the proposed “oracle-like” tool could provide that sort of information. But more directly I’m interested in experimenting with how rustc itself will format error messages that arise, since that’s something we have immediate and direct control over. I think something similar to our current approach could be adapted, though.


#12

A very interesting read, thank you!

@troplin

One thing that may be reassuring is that these non-lexical lifetimes are still lexical with respect to the MIR. In other words, they are a static property of the CFG and not a truly dynamic notion. I think this will help mental reasoning.

I tend to use smaller scopes

Well I for one will still add naked blocks to indicate the lifetimes that can be made to coincide with the introduced scopes :).

From the blog posts

You might think we could just remove the rule altogether, and say that the lifetime of a reference must include all the points where the lifetime is used, with no special treatment for references stored into variables.

Hmm, at first this sounded really appealing to me too. But isn’t that really saying that we’re going to try to track data flow through assignment? That seems a bit too dynamic. IMO the crux of all of this is deciding how much to forget on assignment.

But in another sense dropck may stop being a special case.

IMO it is extremely important that this happen eventually: a goal, not a happy side effect.

lifetime as a set of paths

Yes, this is the proper semantic interpretation of lifetimes. On the other hand, over cyclic graphs the path space is infinite, so this is not tractable as the only interpretation. The live-range concept from blog 2 gives me an idea for weeding out pathological paths (bad lifetime reentry, as in blog two) and keeping the lifetime space finite (no “first n times through a loop”). Simply specify the nodes at which a path is allowed to begin, and the points at which, if the path includes them, it must end; then take all paths satisfying that.

For types with destructors (or linear types) this is actually easiest: a path begins on move-in and ends on the destructor’s borrow or on move-out. For types without destructors, there is a subtlety: the last read cannot end the path; rather, the path ends at the point at which it is known that the last write has occurred. For example, if the last read is in a loop before the condition that breaks out, the path must end after the branch, because end points are mandatory. This may mean interpolating the CFG so paths can end on a “midpoint”/edge instead of just the existing nodes. Finally, note that in this case a write to an initialized lvalue is both a path start and a path end (with destructors, the destructor comes first, so there’s an edge of deadness separating the end and the start).

In addition to ruling out bad lifetimes and shrinking the lifetime space, this definition also yields a potentially more compact representation for lifetimes.

('a: 'b) at P

Heh, reading about this gave me some déjà vu, as if I had almost reached the conclusion that such a thing was needed when grappling with &in &out or linear types some time before.

Under the “lifetime as a set of paths” interpretation, I think this means:

∀ path0 ∈ 'b. P ∈ path0 ⇒ ∃ path1 ∈ 'a. path0 ⊆ path1

which sounds nice to me :).

…fragments…

Ooo! I think/hope this gets us one step closer to solving https://gist.github.com/tomaka/da8c374ce407e27d5dac .


#13

Actually, I like that it is rule-based. I’m a bit worried that the new “rule” will just be: you can do everything that the compiler allows.

Sure, it’s comfortable when it compiles, but I suspect it becomes more difficult to reason about why a specific case doesn’t work.


#14

There seem to be a number of typos and errors in the most recent blog post.

          A [ map = HashMap::new() ]
          1 [ key = ...            ]
          2 [ goto                 ]
                |
                v
          B [ map_ref = &mut map           ]
          1 [ tmp = map_ref1.get_mut(&key) ]
          2 [ switch(tmp)                  ]
                |          |
               Some       None
                |          |
                v          v
  C [ v1 = (tmp as Some).0 ]  D [ map.insert(...)                      ]
  1 [ value = v1           ]  1 [ map_ref2 = &mut map                  ]
  2 [ goto                 ]  2 [ v2 = map_ref2.get_mut(&key).unwrap() ]
                |             3 [ value = v2                           ]
                |             4 [ goto                                 ]
                |          |
                v          v
             E [ use(value) ]

Please s/map_ref/map_ref1/

Then

'm1: 'v1 – because of B/1
'm2: 'v2 – because of D/2
'v1: 'v0 – because of C/2
'v2: 'v0 – because of D/5

But there is no D/5. Also, I suspect it needs to be C/1, not C/2.

Then

'v0: E/0 – value is live here
'v0: C/2 – value is live here
'v0: D/4 – value is live here

I think this needs to be C/1 and D/3, or I could just be misunderstanding.

Also, I believe this is the dominator tree you gave in your post:

  A: let mut map = HashMap::new();
    B: let key = ...;
    C: let map_ref1 = &mut map
      D: map_ref1.get_mut(&key)
      E: Some(value) => value
      F: map.insert(key.clone(), V::default())
        G: let map_ref2 = &mut map
        H: map_ref2.get_mut(&key).unwrap()
      I: use(value)

But it seems this is incorrect. Isn’t this one correct:

  A: let mut map = HashMap::new();
    B: let key = ...;
      C: let map_ref1 = &mut map
        D: map_ref1.get_mut(&key)
          E: Some(value) => value
          F: map.insert(key.clone(), V::default())
            G: let map_ref2 = &mut map
              H: map_ref2.get_mut(&key).unwrap()
          I: use(value)

#15

I’m working on a project in which I’ve had to work around the borrow checker’s limitations a few times.

I just read Niko’s blog posts on non-lexical lifetimes, and I don’t think I fully understand his solution, but I came up with my own approach to solve this problem in a manner that is, I believe, simple to explain. My approach doesn’t need us to redefine what a “lifetime” is.

Let’s start with a short example:

fn main() {
    let mut data = [1, 2, 3];
    let m = &mut data;
    let i = &data; // error: cannot borrow `data` as immutable because it is also borrowed as mutable
}

&data is disallowed because the borrow held by m is still active. What if the compiler invalidated outstanding borrows when trying to take an incompatible borrow (just like the compiler invalidates variables or struct fields after moving out of them)? In other words, it’s as if we moved/dropped the borrowed pointer (even for immutable borrowed pointers, which are Copy, so we can’t do this explicitly even with a call to std::mem::drop()).

fn main() {
    let mut data = [1, 2, 3];
    let m = &mut data;
    // invalidate m here
    let i = &data;
}

This is valid because we don’t use m anymore after taking the immutable borrow. Note that I’m not talking about changing the representation of the lifetimes associated with the borrows; instead of raising an error about an existing borrow, taking a borrow would simply invalidate earlier conflicting borrows.
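Sketched concretely (the index assignment is my own addition, to give `m` a use before the shared borrow):

```rust
// `m`'s last use precedes the shared borrow, so under the proposal
// (or under liveness-based lifetimes) no conflict remains when `i`
// is created.
fn demo() -> [i32; 3] {
    let mut data = [1, 2, 3];
    let m = &mut data;
    m[0] = 10;       // last use of `m`; it is invalidated/dead after this
    let i = &data;   // ok: `m` is no longer live
    *i
}
```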

Now, what happens if we try to use m after taking the immutable borrow?

fn main() {
    let mut data = [1, 2, 3];
    let m = &mut data;
    // invalidate m here
    let i = &data;
    println!("{:?}", m); // error: m has been invalidated
}

Instead of signalling an error on &data, we’d signal an error on uses of m after &data, with a note accompanying the error pointing to &data (the incompatible borrow) that caused m (the original borrow) to be invalidated. In other words, we’d report an error similar to “use of moved value” [E0382].

Open question: If we wrap the let i... statement in a block, should uses of m after the block still be disallowed?

fn main() {
    let mut data = [1, 2, 3];
    let m = &mut data;
    {
        // invalidate m here
        let i = &data;
    }
    println!("{:?}", m); // is m valid here?
}

Now, what happens if the conflicting borrow is stored in a type that has a destructor?

struct Wrapper<'a>(&'a i32);

impl<'a> Drop for Wrapper<'a> {
    fn drop(&mut self) {
        println!("{:?}", self.0);
    }
}

fn main() {
    let mut data = vec![1, 2, 3];
    let i = Wrapper(&data[2]);
    // invalidate i here
    data.clear();
}

What does invalidating i mean here? We have to consider when the destructor for i will run. If it runs at the invalidate i here comment, then developers will be surprised that variables are no longer always dropped at the end of the enclosing block. If it runs at the end of the block, then we can cause memory unsafety (in the above example, we’d try to print data[2] after clearing the vector).

Perhaps the sane way to resolve this is to give an error to force the programmer to write an explicit call to std::mem::drop() to invalidate i.

fn main() {
    let mut data = vec![1, 2, 3];
    let i = Wrapper(&data[2]);
    drop(i); // explicitly invalidate i here
    data.clear();
}

The idea is to use the same analysis as for moved values (which is based on control flow) to force borrows to be invalidated. Therefore, it would account for ifs, matches, loops, etc.
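As a sketch of that control-flow sensitivity (my own example, not from the posts): a mutable borrow whose last use is inside one arm of an `if` is dead on every path that follows, so later uses of the borrowed variable are fine.

```rust
// `m` is last used inside the `if`, so on every path reaching
// `data.push(5)` it is dead; a flow-based analysis accepts this
// without any extra scope delimiting the borrow.
fn demo() -> Vec<i32> {
    let mut data = vec![1, 2, 3];
    let m = &mut data;
    if m.len() > 2 {
        m.push(4); // last use of `m`
    }
    data.push(5); // ok: `m` is dead on all incoming paths
    data
}
```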

This is just a theory, I haven’t proven that it’s correct, so maybe there are situations in which this solution would either cause unsafety or still be too restrictive?


#16

This sounds like a reasonable description of effects non-lexical borrowing might have in straight-line code. But in those cases, most problems caused by lexical lifetimes have pretty acceptable workarounds in my experience (mostly, inserting additional scopes, which is ugly and counter intuitive but rarely requires big refactoring or performance hits). The tricky situation is when only some paths through the code need a (long) borrow. Niko’s first post has two examples of this, termed problem cases #2 and #3. To address these IMHO more serious problems, one needs to reason about the control flow graph and the paths through it — either directly or indirectly (e.g. via dominators).


#17

We actually can’t implement a scheme like this, at least not with the current type system. However, non-lexical lifetimes would result in many of the same effects.

Let me explain starting with your first example:

fn main() {
    let mut data = [1, 2, 3];
    let m = &mut data;
    let i = &data; // proposal is to have this invalidate "m"
}

The problem is that the current type system has no “reverse mapping” from data to m. In other words, we don’t know the precise set of pointers that might be pointing at bits of data. For example, imagine:

let m = &mut data;
let (m1, m2) = foo(m);
let n = &data;

fn foo(m: &mut Data) -> (&mut u8, &mut u8) { (&mut m.f, &mut m.g) } // assuming some suitable `Data` def'n

Here, the borrow of n would have to invalidate m1 and m2. It’s not that it’s impossible to define the set of things that might be pointing into data, but the current lifetime system doesn’t work that way. It tracks that data is borrowed for a particular period of time, but makes no effort to track how the data in that reference threads its way around. (To make things more fun, it’s also good to think about what happens when you include unsafe pointers, etc.)

However, the NLL proposal would (in this particular example) achieve the same effect by making the borrow of m shorter (since it is not used later).

Now, there are alternative systems that would work more like what you described. In that system, you wouldn’t assign a reference a lifetime, but instead you would assign to each reference a set of paths – those paths are basically a “points-to” set. So, in that case, the type of m would be &{data} mut Data (and the type of m1 and m2 would be &{data} mut u8). Now when data is reborrowed, we can know what to invalidate.

As part of thinking about NLL I also pursued this line of thinking. I find it appealing in some ways. I’ve been meaning to write up my results. I found that it is largely equivalent, but there are corner cases where you can see different behavior. I believe it would not be strictly backwards compatible, and I think that lifetimes can be more expressive – but I’d like to investigate more. Would be a good topic for a blog post!


#18

I really like the proposal. And I think that, especially for new people, it’s easy to understand. After all, it’s like: I’m not using that variable after this line, so why should I bother with anything that was used to construct it?

AND the borrow checker could give really understandable error messages like: Hey, on line 7 you’re trying to use “data”, which is borrowed out to “slice”, which is still used on line 8 and hence alive and locked…

This is IMHO more approachable than @troplin 's approach, which is very elegant from a strictness point of view.


#19

I think the way this “proactive invalidation” strategy would become very challenging is made even clearer by this example:

let m = &mut data;
let (m1, m2) = foo(m);
let n = &data.f;

fn foo(m: &mut Data) -> (&mut u8, &mut u8) { (&mut m.f, &mut m.g) } // assuming some suitable `Data` def'n

Many users would intuitively expect m1 to be invalidated by n, but not m2. This seems like it could easily be undecidable.


#20

One would need to make pessimistic assumptions based on the type of foo.