A defer discussion

Prior art

In Asynchronous clean-up @withoutboats offers the do { .. } final { .. } feature to allow executing code on exit, whether that exit occurs on the regular path, the error path, or via panic.

In Pre-pre-pre-RFC: implicit code control and defer statements, @zetanumbers proposes a fairly elaborate solution including many different aspects.

This discussion wishes to focus on "just" defer.

Motivation

As discussed in @withoutboats' post, AsyncDrop is an elusive dream, whereas a surefire way to execute code on scope exit -- no matter why -- could plausibly be one of the building bricks to achieving cancellation in asynchronous contexts.

Today, guaranteeing execution on scope exit in Rust requires either:

  1. Nesting that code -- such as with catch_unwind -- so as to have a single exit point.
  2. Using the Scope Guard pattern, a mainstay in C++ as well.

Unfortunately, neither solution can be said to be practical:

  1. Nesting the code into another function/lambda makes control-flow compose poorly.
  2. While Scope Guard works well in C++, in Rust it runs afoul of borrowing rules.

The functionality the user wishes to express is somewhat trivial; it's just hard to express in current Rust without jumping through hoops and writing much boilerplate.
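To make the boilerplate concrete, here is a minimal sketch of the Scope Guard pattern in today's Rust; the `Guard` type and `demo` function are illustrative, not a proposed API. Note how all further access to the guarded data must be routed through the guard:

```rust
// A hand-rolled scope guard: the guard holds the data it cleans up,
// so cleanup runs on every scope exit (fall-through, return, panic).
struct Guard<T, F: FnMut(&mut T)> {
    data: T,
    cleanup: F,
}

impl<T, F: FnMut(&mut T)> Drop for Guard<T, F> {
    fn drop(&mut self) {
        (self.cleanup)(&mut self.data);
    }
}

fn demo() -> Vec<&'static str> {
    let mut log = Vec::new();
    {
        let mut guarded = Guard {
            data: &mut log,
            cleanup: |log: &mut &mut Vec<&'static str>| log.push("cleanup"),
        };
        // The body can only reach the data through the guard, which is
        // precisely the inflexibility the text describes.
        guarded.data.push("work");
    } // guard drops here; cleanup runs after the body
    log
}

fn main() {
    assert_eq!(demo(), ["work", "cleanup"]);
}
```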

To block or not to block

@withoutboats proposes what is essentially a try { ... } finally { ... } block. It's a well-known construct, however it is typically painful to use: it forces a choice between uninitialized variables and rightward drift.

By comparison, a defer statement is more composable: any number of defer statements may be weaved within the code, at appropriate places. This makes it more flexible, once again relieving the user from jumping through hoops.

Anatomy of defer

Basics of defer

At its simplest, defer is just a way to insert a piece of code so that it runs whenever the scope exits, no matter how the scope exits.

Just like the compiler inserts a call to drop, it would insert a call to the deferred action. The one benefit of a built-in language feature is that borrow-checking will know to defer the start of the borrow to the moment the code is injected.

If we think of desugaring, it means:

//  Original source code
let mut some_queue = ...;

while let Some(item) = some_queue.pop() {
    defer || some_queue.push(item);

    if item.is_special() {
        some_queue.push(SpecialItem);
    }
}

Will be turned into:

//  Desugared code
let mut some_queue = ...;

while let Some(item) = some_queue.pop() {
    if item.is_special() {
        some_queue.push(SpecialItem);
    }

    //  Defers & Drops.
    some_queue.push(item);
}

Note that in this example, defer refers to some_queue mutably yet no borrow checking issue arises.

Why did I use a closure syntax? It'll become apparent later; just ignore it for now.

Defer/Drop ordering

Given that Drop drops a value -- making it unusable -- it is evident that defer statements must precede drops, since they access the values.

That is, upon exiting a scope:

1. The scope's defer statements are executed, in reverse order.
2. The scope's values are dropped, in reverse order.

The defer statements are interleaved with the calls to drop as if they were themselves the Drop implementation of an anonymous variable created at the point of the defer statement.
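Since the proposal models each defer as the Drop of an anonymous variable, the resulting ordering can already be demonstrated with plain Drop impls; this sketch (names illustrative) records the order in which scope exit runs the actions:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Each `Tracer` records its label when dropped. Scope exit runs drops
// in reverse declaration order, which is exactly the interleaving the
// proposal gives deferred actions placed between bindings.
struct Tracer {
    label: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Tracer {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.label);
    }
}

fn demo() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _a = Tracer { label: "drop a", log: log.clone() };
        let _defer = Tracer { label: "deferred action", log: log.clone() };
        let _b = Tracer { label: "drop b", log: log.clone() };
    } // exit order: b, deferred action, a
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    assert_eq!(demo(), ["drop b", "deferred action", "drop a"]);
}
```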

Dismissal

A common functionality found in Scope Guard libraries is the ability to "cancel" a deferred statement.

This is often used to maintain invariants: a Scope Guard is created to restore an invariant, the invariant is then broken, the algorithm executed, and the invariant restored at which point the Scope Guard is superfluous and must be dismissed.

It's notable that this very functionality is already built in Drop, and it thus seems appropriate to simply build it into defer as well:

let guard: DeferGuard<'_> = defer || abandon_hope();

//  break things down, and put them back together.

guard.forget();

Where DeferGuard contains a reference to the "drop-flag" of the defer statement. The most direct implementation would be for the drop-flag to be referred to by a &mut bool, but there may be a case for &Cell<bool> or even &AtomicBool.
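A library-level sketch of this dismissal mechanism, here assuming the &Cell<bool> flavor of the drop-flag; the names DeferGuard and forget mirror the text, but the implementation is purely illustrative:

```rust
use std::cell::Cell;

// The guard's Drop consults a shared flag; `forget` flips the flag so
// the deferred action is skipped.
struct DeferGuard<'a, F: FnMut()> {
    armed: &'a Cell<bool>,
    action: F,
}

impl<'a, F: FnMut()> DeferGuard<'a, F> {
    fn forget(&self) {
        self.armed.set(false);
    }
}

impl<'a, F: FnMut()> Drop for DeferGuard<'a, F> {
    fn drop(&mut self) {
        if self.armed.get() {
            (self.action)();
        }
    }
}

fn demo(dismiss: bool) -> u32 {
    let ran = Cell::new(0);
    let armed = Cell::new(true);
    {
        let guard = DeferGuard { armed: &armed, action: || ran.set(ran.get() + 1) };
        if dismiss {
            guard.forget(); // the deferred action will not run
        }
    }
    ran.get()
}

fn main() {
    assert_eq!(demo(false), 1);
    assert_eq!(demo(true), 0);
}
```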

Result handling

Apart from the flexibility, a key reason to use defer is to execute fallible statements, since Drop is infallible.

This leaves us in a pickle: if the defer statement is executed on the failure path and fails itself, which error should be surfaced to the user?

This calls for the ability to inspect and modify the result -- prior to it being returned -- which a defer can request by simply taking an argument:

defer |result| result.and_then(|| take_action());

The type of result is simply the return type of the current enclosing function (or lambda).

There are multiple options available as to which defaults to pick, and how to handle that. A solid starting choice would be for the defer closure:

  1. To either take no argument, in which case it must be infallible, and return () or diverge.
  2. Or to take one argument, in which case it must return the same type as the enclosing function.

Other choices are available, such as defaulting to returning the error from the enclosing function... but it may be less obvious to the reader, and it can always be added later.
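For illustration, the chosen starting rule can be emulated today with a hypothetical with_defer helper that runs the body and then hands its result to the deferred closure; unlike the proposed feature, this sketch does not run on unwind:

```rust
// `with_defer` is a hypothetical helper, not a proposed API: the
// deferred closure receives the body's result and must return the
// same type, matching rule 2 above.
fn with_defer<T, E>(
    body: impl FnOnce() -> Result<T, E>,
    deferred: impl FnOnce(Result<T, E>) -> Result<T, E>,
) -> Result<T, E> {
    deferred(body())
}

fn demo() -> Result<u32, &'static str> {
    with_defer(
        || Err("primary failure"),
        // The deferred code may inspect and even replace the result.
        |result| result.or(Ok(0)),
    )
}

fn main() {
    assert_eq!(demo(), Ok(0));
}
```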

Panic handling

If a defer closure takes as an argument the result of the enclosing function... then how is it invoked when unwinding? No such result exists!

This is a pickle, and a possible argument for not allowing inspection of the result, though such would be a missed opportunity.

A reasonable option would be double-wrapping. The result of the enclosing function is wrapped into a std::thread::Result<...> as per the return type of catch_unwind, and that is what is passed to the defer closure.

A facility to resume unwinding -- such as let result = result.unwrap_or_resume_unwind(); -- would help with ergonomics, though let result = result.map_err(|e| resume_unwind(e)).unwrap(); is already possible if a tad clunky.
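The floated unwrap_or_resume_unwind facility could be prototyped today as an extension trait on std::thread::Result; the trait and method names are hypothetical:

```rust
use std::any::Any;
use std::panic::resume_unwind;

// Hypothetical ergonomic helper: unwrap the double-wrapped result, or
// resume unwinding with the captured panic payload.
trait ResumeUnwindExt<T> {
    fn unwrap_or_resume_unwind(self) -> T;
}

impl<T> ResumeUnwindExt<T> for Result<T, Box<dyn Any + Send + 'static>> {
    fn unwrap_or_resume_unwind(self) -> T {
        match self {
            Ok(v) => v,
            Err(payload) => resume_unwind(payload),
        }
    }
}

fn main() {
    // std::thread::Result<T> = Result<T, Box<dyn Any + Send + 'static>>
    let ok: std::thread::Result<u32> = Ok(7);
    assert_eq!(ok.unwrap_or_resume_unwind(), 7);
}
```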

Async

Since the motivation of the original post was asynchronous cancellation, the defer closure should potentially be async.

It seems simple enough to allow annotating the closure -- defer async || ... -- and in doing so allow differentiating between closures which require async and those which don't. The desugaring would require adopting the poll_cancel extension of Future, and is a whole other topic.

Conclusion

The Scope Guard pattern is a mainstay of systems programming languages such as C++, D, and other languages such as Go due to its flexibility in expressing "on exit" code, which leads to it being favored over try/catch and co.

The pattern cannot really be implemented as a library in Rust due to borrow-checking issues, which is why a language construct makes more sense.

Furthermore, as a language construct, it becomes possible to get more out of it: namely inspecting return values as they go.


Rust has scope-guard-as-a-library: it’s called scopeguard. It’s been very useful already, and it supports the sort of “forget” operation you mention here, as well as different behavior on panic or success (this is just an is_panicking check). It makes the guard the owner of data to get around (some) borrowing issues. But it doesn’t do async, and it can’t inspect return values.
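The is_panicking-style behavior is easy to hand-roll without the crate, which may clarify the mechanism; this sketch is not scopeguard's actual implementation, just a minimal analog that runs its cleanup only during unwinding:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// Runs its closure in Drop only if the thread is unwinding.
struct OnUnwind<F: FnMut()>(F);

impl<F: FnMut()> Drop for OnUnwind<F> {
    fn drop(&mut self) {
        if thread::panicking() {
            (self.0)();
        }
    }
}

fn demo() -> bool {
    // Silence the expected panic message from the default hook.
    std::panic::set_hook(Box::new(|_| {}));
    let fired = AtomicBool::new(false);
    let _ = catch_unwind(AssertUnwindSafe(|| {
        let _g = OnUnwind(|| fired.store(true, Ordering::SeqCst));
        panic!("boom");
    }));
    fired.load(Ordering::SeqCst)
}

fn main() {
    assert!(demo());
}
```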

One thing it does do is interleave Drop with defer, by virtue of being a Drop itself. I’m pretty sure even a true defer wants that behavior—otherwise you could, say, be running extra code under a lock that you didn’t mean to.

Swift also has defer rather than finally. I haven’t kept up on whether they support async operations in their defer, but Swift has a single async runtime, so they don’t have to worry about unpolled futures being dropped. (Cancellation is signalled cooperatively through task-locals.)


Native defer would be much more usable than scopeguard, because it could cooperate with borrow checking and moves in a way that user code can't.

Currently it's not possible to directly tell Rust that the scope guard may want to have exclusive access to move or use &mut, but that only needs to happen when exiting the scope, not at the point when the closure is created. I assume that first-class implementation would be more seamless than the proxy objects of scopeguard.


Could you showcase how to use this library on the first example -- where the deferred action is to push back the item in queue?

Could you do it if there were two nested defer statements, each attempting to push an item back into some_queue?

As mentioned in the motivation, the issue with Scope Guard is that they run afoul of borrowing rules, because as per the language rules the variables they refer to are borrowed for the entire lifetime of the guard, when really they "logically" only need to be borrowed at the last moment: when executing the actual deferred statement.

let mut some_queue = ...;

while let Some(item) = some_queue.pop() {
    let mut guarded_queue = scopeguard::guard(&mut some_queue, |some_queue| {
        some_queue.push(item);
    });

    if item.is_special() {
        guarded_queue.push(SpecialItem);
    }
}

(in this example I’ve assumed item is Copy for simplicity but you could put it inside the guard as well)

For pushing two items you can nest the guards, or use RefCell. I agree neither is ideal, but it is possible. (I did say it only handles some of the borrowing issues.)
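For reference, the RefCell variant might look like this sketch, with two guards sharing the queue and deferring their exclusive borrows to drop time (the PushBack type is illustrative):

```rust
use std::cell::RefCell;

// Each guard pushes its item back into the shared queue when dropped;
// `&RefCell` lets the body and both guards coexist under borrowck.
struct PushBack<'a> {
    queue: &'a RefCell<Vec<u32>>,
    item: u32,
}

impl Drop for PushBack<'_> {
    fn drop(&mut self) {
        self.queue.borrow_mut().push(self.item);
    }
}

fn demo() -> Vec<u32> {
    let queue = RefCell::new(vec![1, 2]);
    {
        let _restore_a = PushBack { queue: &queue, item: 10 };
        let _restore_b = PushBack { queue: &queue, item: 20 };
        queue.borrow_mut().push(3); // the body can still use the queue
    } // drops run in reverse order: 20 is pushed before 10
    queue.into_inner()
}

fn main() {
    assert_eq!(demo(), [1, 2, 3, 20, 10]);
}
```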

I didn’t bring up scopeguard as an argument against a native defer; indeed, I think it very clearly has limitations that a native defer would be able to improve on. But we don’t have to start from zero, and anyone who stumbles across this thread looking for defer should know that there’s a partial solution they can use in stable Rust today that might be good enough for their use case.


Reading this as a pre-rfc, scopeguard absolutely is prior art that needs to be discussed, and also has some basic holes that would be better handled by a language-level solution (for instance, a defer block that borrows two things). It would be nice to see this addressed explicitly early on, since it shouldn't be the crux of this discussion.

It does seem like defer should be in the language and/or standard library. It seems to have many tricky side issues, and maybe can't be implemented "for real" using existing language constructs.


I don't think a language-provided defer needs to (or even really should) support forget-based defusal. Instead, the defer block should be unconditionally run on scope end without need for any drop flags, and the user can write an if dirty check themselves and/or build DeferGuard as a library with almost no limitations compared to a language native guard defusal (and the option to choose between &mut, &Cell, and &Atomic). For the edge case of a defer that needs to conditionally take ownership / not hold its borrows on some edges, the scopeguard approach is sufficiently functional.

If defer is run on unwind (and, if it's to be used for soundness, it should be), there needs to be some justification for why defer async isn't, or, well, I suppose it could suspend the unwind to let the future progress through the defer and resume it afterwards, but that seems prone to accidental unwind suppression (or requires unwinding out of drop, which the project would like to actively discourage and continues to consider making into an immediate abort à la panic while unwinding due to continued soundness difficulties).

Unwind-aware defer inspecting a suspended unwind as Result<T, Box<dyn Any + Send + 'static>> essentially isn't possible, because it relies on Box, i.e. alloc, but language functionality should only rely on core. (Yeah, unwinding requires alloc and probably std, but you can defer without std, surely.) It might be possible to do Result<T, &dyn Any + Send + 'static>.

As a minor note of potential interest, become-based TCO relies on changing locals’ drop timing to before the becomed call instead of after the return. defer needs to be similarly shifted, so defer in a become-using function cannot inspect the return value (it would otherwise use the same interleaved drop glue timing as normal drop glue).


I think a defer feature probably should acknowledge and provide deliberate space for block output/unwind inspecting/impacting deferral. But an MVP which only provides the unconditional nullary deferral is still highly useful if it has the borrowck integration. So I see it as useful to break this into parts, e.g.

digraph {
  defer -> defer_async;
  poll_drop -> defer_async;
  defer -> defer_inspect;
  defer_async -> defer_async_inspect;
  defer_inspect -> defer_async_inspect;
  async_unwind -> defer_async_inspect;
}

(FWIW I expect "async_unwind", and thus "defer_async_inspect", to be fairly unlikely to happen.)


(NOT A CONTRIBUTION)

I don't have time to engage further in this thread but these are my pretty firmly held opinions.

defer vs do .. final

This is untrue: do .. final blocks can similarly be "weaved" within the code just as much as defer statements, "at appropriate places"; the only difference is this:

Yes! Rightward drift! That's the entire argument as far as I can tell: defer avoids "rightward drift" by not indenting the code that would be enclosed by a do block with do ... final.

I find arguments about rightward drift not particularly compelling. Before structured programming, we had no "rightward drift" at all - everything was written one statement after another. Then, we introduced block structure to more clearly delineate control flow and lexical scope. Now, the online Rust community will justify any syntactic feature by just saying that it eliminates "rightward drift" - in other words, that it eliminates structure! I feel we've completely lost the plot if "no rightward drift" is enough to stand on its own as an argument for a syntax. GOTO would be justified by the same argument!

To me, it's simple: if you're writing an algorithm, it's easier if you have a block of code for it to appear in a textual position aligned with its control flow. It's easier to see that when a do block exits then its final block runs than to see a defer statement and figure out where it will run (which will be later, interleaved with destructors of bindings in the same block it appears in). This is an especially big deal if it contains an await: with defer you would need to figure out where the defer block is and remember that to know that there's a cancellation point there.

Dismissal

If you want to "dismiss" a final block, enclose it in a boolean. Having to always read the code to find where the "defer token" was "forgotten" to know if a defer block will actually run (in addition to figuring out where it runs) does not seem like a good feature to me. The only argument here again is "rightward drift."

If you can forget any defer block it is not suitable for ensuring that clean up code runs as in the use cases for scoped tasks at the end of my post. It should not be a part of the contract that defer blocks can always be forgotten. Just use a boolean.

Async

The mention of async in this post confuses me: you can await in a final block (or defer block) if the scope its in is async, there's no reason at all to add some sort of async defer which introduces another async keyword. The main point is that you can await inside the clean up code for async scopes. The issue with async defer blocks is that any await is a cancellation point, so making it the programmer's job to figure out when it happens would be putting a lot of cognitive load on users, without any good justification from my perspective.


Indeed. I am thinking that just wrapping the return type into an Option would be sufficient. If the Option is None, we're unwinding, and the panic will be propagated unconditionally.

I wonder... is it that strange?

It may be technically difficult, but I note that synchronous code can execute arbitrary code during unwinding, including I/O. It may deadlock, livelock, etc... so in a sense, the very possibility of running arbitrary computations during unwinding opens the door to possibly "forgetting" a panic.

This one should be easy enough: with defer being a language construct, the argument-version of defer can simply be forbidden in tail-call functions, with a nice diagnostic explaining why.

I see the rightward drift argument as short-hand for "mandatory blocks".

Yes, blocks (and structured programming) are important. BUT the block introduced by a do ... final has two orthogonal features:

  1. It scopes the variables defined within the block.
  2. It scopes the effect.

This mix of responsibilities within a single language feature -- the inability to scope the effect without also scoping the variables -- is the painful part, with the rightward drift being the symptom.

There are work-arounds, of course, but when you get to:

let (variable_a, variable_b, variable_c) = do {
     let variable_a = ...;
     let variable_b = ...;
     let variable_c = ...;

     (variable_a, variable_b, variable_c)
} final {
    // undo
};

//  Use the variables.

Instead of:

defer || /* undo */;

let variable_a = ...;
let variable_b = ...;
let variable_c = ...;

//  Use the variables.

You do feel the sharp pain of this unconditional mixing.

Not always.

You can call defer without assigning the token, in which case you're guaranteed it'll run.

If the token is assigned, then it's a special case.

On the other hand, @CAD97 raised a stronger point:

It's not clear how easily the user could pick their poison. The lack of parametricity is perhaps the better reason to avoid building dismissal, and instead rely on a library (or a simple boolean, for simple cases):

let mut dismisser = Dismisser::new();
defer || if !dismisser.dismissed() { ... };

My thinking was that just because a function is async doesn't necessarily mean that its deferred statements (or final blocks) are.

Further highlighting the few that are async therefore seems worth it to me... but in the case of defer, the async would really be part of the closure (not the defer), just like with any closure. And if closures don't need async, then there shouldn't be any here either.


For me, defer vs final isn’t about rightward drift; it’s about putting the cleanup code next to the initialization code. It does mean the flow of execution is out of sequence, so I won’t deny it’s a trade-off.


(NOT A CONTRIBUTION)

This is true of every block structured control flow primitive. I think you are just making an argument against structured programming in general: "while loops combine two orthogonal features: looping control flow and a scope in which variables are bound; therefore GOTO is better so you can write loops without defining a scope for variables."

These features are not orthogonal: the point of structured programming is to make it easy for users to understand control flow and the set of variables in scope at any point. Lexical scope and block structured control flow primitives intentionally bind these aspects of the program together in a syntactic way to make it easier to understand.


It's actually not all that difficult to implement, e.g.

async {
    let f = &mut pin!(f.into_future());
    let r = poll_fn(|cx| {
        let f = AssertUnwindSafe(f.as_mut());
        let cx = AssertUnwindSafe(cx);
        let r = catch_unwind(|| { f }.0.poll({ cx }.0))?;
        Poll::Ready(Ok(ready!(r)))
    })
    .await;
    match r {
        Ok(r) => r,
        Err(e) => {
            scopeguard::defer_on_unwind! {
                // cause a double panic abort
                panic!("panicked during async panic cleanup");
            }
            poll_fn(|cx| f.as_mut().poll_drop(cx)).await;
            resume_unwind(e);
        }
    }
}

but, laundry list of caveats:

  • Cleanup code no longer sees thread::panicking() == true. This causes issues around double panics.
  • It makes select!/join! into (soft) unwind boundaries, instead of just spawn boundaries.
  • catch_unwind frames makes unwinding no longer "zero cost" to the nonunwinding control flow[1].
  • Unwinding out of drop glue is cursed; never deliberately do so (except maybe for tests); it may cause aborts.
  • Async contexts cannot use soundness-relevant drop glue (across awaits) for the same reason vec::Drain can't, and must utilize leak amplification ("PPYP") techniques instead.
  • Sync cleanup must be available for impl Drop anyway.
  • Unwinding is for exceptional cases and always indicates a bug. (There's no need to abuse unwinding to hack in cancellation when you can just drop at a suspension point.)
  • Cases that would need to leak to maintain soundness are limited[2].
  • etc.

It's unfortunate that people rarely realize you can, without making bindings mut, do

let variable_a;
let variable_b;
let variable_c;
do {
    variable_a = /* … */;
    variable_b = /* … */;
    variable_c = /* … */;
} finally {
    /* … */
}

It's a very useful technique! This is also different from the let-else case, imho, since the block scope is (by definition, if it has a meaningful cleanup) nontrivial.
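For readers unfamiliar with it, the deferred-initialization half of this technique already works in plain Rust (no do/final needed), checked by the compiler's definite-initialization analysis:

```rust
// The bindings are not `mut`; each is assigned exactly once inside the
// block, and using one before assignment would be a compile error.
fn demo() -> (u32, u32) {
    let a;
    let b;
    {
        a = 1;
        b = a + 1;
    }
    (a, b)
}

fn main() {
    assert_eq!(demo(), (1, 2));
}
```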

Alright now consider

let _ = defer { … };

Which case is that? This is morally equivalent to drop(defer { … }).

Generally speaking, constructs which behave differently in expression and statement position are a bad idea. (Semicolon elision for expr-with-block (e.g. match, if) works most of the time but is super surprising in edge cases.)

This is the true key benefit to defer imo. Compare: Java had try-finally for ages before adding try-with-resources to tie cleanup to the resource acquisition. That's more analogous to RAII, since the cleanup is tied to the resource type, but the reasoning behind it isn't about forgetting to do the relevant cleanup[3], it's about putting the "do cleanup" source directly next to the "create need for cleanup" source.

defer doesn't destructure control flow any more than RAII cleanup does. It destructures control flow in the exact same way. Arguably, it's actually more structured than RAII cleanup since it's irreversibly tied to lexical scope and can't be reordered by moving the value it's tied to.

There is a difference between the existing stable structured control flow constructs and do-finally, though — the control flow blocks for those all run a dynamic number of times, whereas do blocks always run exactly once.

On the other hand, try blocks both do fit into the same shape of exactly one in-order execution and introduce a binding scope. But I guess they still stand apart in that they always are a value-producing expression, whereas do blocks fall into the expr-with-block hole where they're sometimes treated directly as a statement.

A partial argument against tying the scope to the construct: very often the cleanup wants access to state only relevant through the end of cleanup. So nicely structured usage would then introduce two scopes, e.g. I've written code that might be roughly shaped as

let items = {
    let mut ptr = /* raw alloc *mut [T] */;
    let mut count = 0;
    do {
        while count < ptr.len()
            && let Some(item) = iter.next()
        {
            ptr.get_unchecked(count).write(item);
            count += 1;
        }
        if count != ptr.len()
            || iter.next().is_some()
        {
            panic!("bad ExactSizeIterator")
        }
        Box::from_raw(ptr)
    } final {
        if thread::panicking() {
            dealloc(ptr.cast(), Layout::for_val_raw(ptr));
        }
    }
};

The final block isn't logically tied to the end of scope of the do it hangs off of, but the end of scope of the resource it's finalizing, which is necessarily created outside the do in order to a) only execute the final block if it happened, and b) be able to access the resource it's finalizing. That scope mismatch with final is why defer feels like the better structure to me (especially in a language that already endorses RAII dtors, as useless as that moniker is).

do-final isn't really "for" the same "reschedule-able" cleanup that drop glue is for, sure. The inherent scope probably lends itself better to IO-finally-flush shapes, especially since you might want the flush before the containing scope's end, and might even reuse the IO handle after the flush.

... I really need to go read your post properly. (Heads to do that after posting this reply.) You may have covered it already, but:

It would be quite unfortunate if sync-defer in non-async contexts doesn't run during unwind; people will use it as an adhoc drop cleanup, which can be (and is) used for soundness relevant fixups in sync scopes.

Await-using defer in async contexts can be skipped. This is a fact that Rust must live with, that futures can be abandoned and their loans expire without notification.

Combining these, I would consider it quite unfortunate if do { /* no await */ } final { /* no await */ } could skip the final block in an async context on an unwind. Maybe a unified rule is to run it to first await?

The extra concerns caused by suspension are imo reasonable enough justification for syntactically distinguishing await-available versus no-await defer up front.[4]


  1. Non-catching unwind landing pads aren't exactly "zero cost" either since they impede optimization, but they're essentially zero runtime overhead. A catch frame imposes runtime costs. We've minimized them as much as possible, but they exist, and core language features have a higher standard of "zero cost" than anything else. ↩︎

  2. Primarily just completion based progress which lends buffer ownership to a background task, and the reactor can clean up the buffers when it sees the completion wakeup dangle. ↩︎

  3. You can still entirely bypass try-with-resources and there's no way to require it; it's just a finalize call put into the finally block without any type adjustment done to "open" a resource as part of the syntax form. ↩︎

  4. It's the same reason I fully supported .await despite still being fond of "explicit async implicit await" models. ↩︎


(NOT A CONTRIBUTION)

This isn't the semantics in my post. final blocks without await points always run as normal. The caveats appear when you consider final blocks with await points, but these are the same caveats that appear with poll_cancel, discussed separately; final blocks with await points are just exposing the async clean up mechanism that poll_cancel provides to the higher level register of async/await.

There needs to be a new primitive, an async catch_unwind. When you yield Pending in a poll_cancel while unwinding (possibly via an await in a final block), this would unset the panicking state and also yield Pending, similar to your code (some of the caveats you mention would need to be solved with a better implementation, some are not really caveats or sound wrong to me, but I don't have the bandwidth to dig too deeply into this). Runtimes upgrade to use this instead of the way they currently isolate panics to tasks, so that they can isolate panics to tasks but still let the tasks pend while unwinding.

If you yield Pending outside of an async catch_unwind, this probably needs to be raised to a process abort. poll_cancel can only be made usable in runtimes that have upgraded to use async catch_unwind. This is an unfortunate migration cost; possibly there are ways to mitigate it.


Just a small factoid, but, until recently, I haven’t realized that defer plays exceptionally well with contract programming!

assert!(precondition);
defer assert!(postcondition);
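This can be approximated today with a Drop guard, at the cost of the usual caveat that a postcondition failing during unwind becomes a double panic; the PostCondition type is illustrative:

```rust
use std::cell::Cell;

// Evaluates the postcondition closure on scope exit, mirroring
// `defer assert!(postcondition)`.
struct PostCondition<F: Fn() -> bool>(F, &'static str);

impl<F: Fn() -> bool> Drop for PostCondition<F> {
    fn drop(&mut self) {
        assert!((self.0)(), "postcondition violated: {}", self.1);
    }
}

fn demo() -> u32 {
    let counter = Cell::new(0);
    assert!(counter.get() == 0); // precondition
    let _post = PostCondition(|| counter.get() > 0, "counter was incremented");
    counter.set(counter.get() + 1);
    counter.get() // _post checks the postcondition as the scope exits
}

fn main() {
    assert_eq!(demo(), 1);
}
```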

Thinking about it, I am, indeed, making an argument against structured programming as the bottom-most layer in general.

I am of the opinion that the user should be given flexibility, and I have indeed used goto before, either because it was the cleanest way I could find to express the logic, or because it offered the best code generation. I am not the only one: interpreters and parsers commonly use computed gotos when performance matters.

In that vein, I do find it interesting to note that while is NOT the bottom-most layer of looping in Rust and:

  1. Rust offers loop: it's useful to have the flexibility to place the condition anywhere in the loop, from time to time.
  2. Rust offers labelled breaks & continues, which are a restricted form of goto.

I do agree that blocks are not as bad as closures -- at least you can still directly affect the control-flow of the surroundings -- but they do sometimes get in the way.

Interestingly, one argument in favor of defer (over finally) is that it allows syntactically binding setup & clean-up close together, so that it's immediately obvious which clean-up relates to which setup, that the clean-up is in place, etc...

I personally find that more compelling, and my personal experience using try/catch is that it's too easy for code prior to the try block and code within the catch to become desynchronized specifically due to the increased distance. In particular, during code review, it's not immediately obvious that a change prior to the try should have been matched with a change in the catch, because the catch is far away and not included in the diff (in fact, the leading try itself may not be, either).

If you find yourself relying on a comment of the style "and don't forget to align the clean-up code in the catch block" above a setup line, you've discovered the joys of far-away clean-up blocks.

Agreed. Still boiler-platey, though.

My remark on assignment was to be understood as saying that binding the token to a name meant that one was going to use the token later on: a clue to the reader that this particular defer may not run.

Do remember that the token is just a reference to a boolean -- in some form -- so binding or not binding it has all the effect of binding or not binding a reference to an anonymous stack variable: none, whatsoever.

Thus using let _ = defer ...; would have zero effect. Just as using let _token = defer ...; would have zero effect. Well, apart from potentially confusing a reader as to what the author was trying to do, so there should be a style-lint against it.

Anyway, as you mentioned, there's a significant issue in that the choice of &mut bool vs &Cell<bool> vs &AtomicBool is hard to resolve, and such a special case may thus be better left to the user so they can pick the appropriate form for their usecase. Given the questions, it'd probably be best to avoid it altogether.

That's pretty neat, I hadn't thought of that!


We first introduced match as a primitive to Rust and then later added the try! macro followed by the ? syntax.

I wonder if we should consider doing the same here. First, introduce do ... final, follow that up with some defer! macro and finally add syntactic sugar when there is an overwhelming use case.


Actually, by your logic, we should start with defer.

match is the low-level flexible register atop which try! and ? were built.

You could build a do macro atop defer, but you can't build defer atop a do .. final construct.


Another potentially interesting observation about async defer — while tying the deferred resource finalization to drop timing is natural, as well as sometimes required for nesting borrow lifetimes, it introduces potentially undesired sequencing between finalizations. I.e. consider the difference between the two options of:

// sequential flushes
let file_a = open(…).await?;
defer { file_a.flush().await?; }
let file_b = open(…).await?;
defer { file_b.flush().await?; }
// …

// concurrent flushes
let file_a = open(…).await?;
let file_b = open(…).await?;
// …
defer {
    try_join!(
        file_a.flush(),
        file_b.flush(),
    )?;
}

A definite advantage of do-final is that it makes this nesting much more immediately evident, which is a bit of an affordance pushing people towards doing the probably-preferable join! of finalization in order to minimize the nesting. So the salt of the "rightward drift" is in fact communicating something.

// sequential flushes
let file_a = open(…).await?;
do {
    let file_b = open(…).await?;
    do {
        // …
    } final {
        file_b.flush().await?;
    }
} final {
    file_a.flush().await?;
}

// concurrent flushes
let file_a = open(…).await?;
let file_b = open(…).await?;
do {
    // …
} final {
    try_join!(
        file_a.flush(),
        file_b.flush(),
    )?;
}

At least these primitives permit you to write the join! version if that's what you want. Whereas with async drop you're stuck with the serial order.

This observation actually pushes me some back towards the direction of do-final. Additionally, in the name of MVP (like how we have async and might get generators before exposing the shared underlying concept of coroutines), it can make sense to provide the more limited specialized functionality (do-final) before the more powerful general form (defer) so long as it still makes sense to provide both.

Also, a note on the "stutter" in let (a, b, c) = do {: this shape is already reasonably common in async code, for better or worse, since it's required for join! concurrency. So the fairer comparison might be between these forms instead:

defer {
    /* … */;
}
let (variable_a, variable_b, variable_c) = join!(
    /* … */,
    /* … */,
    /* … */,
);

// or
let (variable_a, variable_b, variable_c) = do {
    join!(
        /* … */,
        /* … */,
        /* … */,
    )
} final {
    /* … */;
};

The difference is significantly smaller if your baseline expectation for a block with multiple outputs is that those outputs' construction will be concurrently join!ed rather than sequentially let-bound. As with the previous note about sequentialization, do-final's "salt" for the sync usage is a reasonable affordance for the async usage.

Every time defer-adjacent functionality gets discussed, I end up with a similar conclusion — deferring adhoc drop glue is fairly straightforward (especially with MIR borrowck — "just" chain the block into the drop glue), but usage which can yield (await) and/or return is an annoying deep complexity pit to develop a properly predictable/consistent language feature.
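For what it's worth, the "chain the block into the drop glue" part really is sketchable in today's Rust: a wrapper whose Drop runs an extra closure before the inner value's own drop glue. WithCleanup is a made-up name, purely illustrative; what the language feature would add on top is borrowck treating the closure's captures as starting at the drop point, plus the yield/return cases that make it hard.

```rust
use std::cell::Cell;
use std::ops::{Deref, DerefMut};

// Illustrative wrapper chaining an extra closure into the wrapped
// value's drop glue.
struct WithCleanup<T, F: FnMut(&mut T)> {
    value: T,
    cleanup: F,
}

impl<T, F: FnMut(&mut T)> Drop for WithCleanup<T, F> {
    fn drop(&mut self) {
        // Runs first; afterwards `self.value`'s own drop glue runs.
        (self.cleanup)(&mut self.value);
    }
}

impl<T, F: FnMut(&mut T)> Deref for WithCleanup<T, F> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.value
    }
}

impl<T, F: FnMut(&mut T)> DerefMut for WithCleanup<T, F> {
    fn deref_mut(&mut self) -> &mut T {
        &mut self.value
    }
}

fn main() {
    let flushed = Cell::new(false);
    {
        let mut buf = WithCleanup {
            value: vec![1, 2, 3],
            // Stands in for a synchronous flush; `.await` here is
            // exactly the part that has no good answer today.
            cleanup: |_v: &mut Vec<i32>| flushed.set(true),
        };
        buf.push(4); // the wrapper is transparent via Deref/DerefMut
        assert!(!flushed.get());
    } // chained cleanup runs here, then the Vec's own drop glue
    assert!(flushed.get());
}
```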

1 Like

Between the two proposals, defer and do ... final, I strongly prefer the latter because it has far less mystical behavior. The Rust abstract machine is largely sequential in its ordering guarantees for statement execution (a property shared with most "C-like" imperative languages). This property is important enough that async/await was invented to address the control flow issues that arise with scheduling callbacks in non-blocking code.

If async/await makes non-blocking code look sequential and it affords greater maintainability over callback soup, then defer is a step backwards by reintroducing nonsequential execution and callbacks in a new shape.

Having said that, I do not think I am completely sold on do ... final, either. It's just the better option of the two for making clear the temporal nature of the code that I need to work on.

If I had my druthers, I would prefer a first-class feature within the type system (as in RAII or typestate) so that users cannot "forget to call the destructor" or "forget to add the defer statement" or "forget to wrap it all in a do ... final."
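For comparison, a minimal sketch of that RAII shape in today's Rust, where the cleanup lives in Drop and so cannot be forgotten at the use site. Tracked is an illustrative name; note that a Drop impl cannot .await or ?, which is the gap this thread is circling.

```rust
use std::cell::RefCell;

// Illustrative RAII guard: the cleanup is part of the type, so callers
// cannot forget to run it.
struct Tracked<'a> {
    log: &'a RefCell<Vec<&'static str>>,
}

impl Drop for Tracked<'_> {
    fn drop(&mut self) {
        self.log.borrow_mut().push("cleaned up");
    }
}

fn main() {
    let log = RefCell::new(Vec::new());
    {
        let _t = Tracked { log: &log };
        log.borrow_mut().push("work");
    } // Drop runs on every exit path, with no defer statement to forget
    assert_eq!(*log.borrow(), ["work", "cleaned up"]);
}
```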

Certainly, this is false. The example in the OP desugars to do ... final like so:

//  Original source code
let mut some_queue = ...;

while let Some(item) = some_queue.pop() {
    do {
        if item.is_special() {
            some_queue.push(SpecialItem);
        }
    } final {
        //  Defers & Drops.
        some_queue.push(item);
    }
}

Perhaps you have some edge cases in mind where this transformation is not so trivial. But this effortlessly shows that "you can't build defer atop a do .. final construct" does not hold.

I'm also annoyed that both desugarings of this unusual example fix the nonsequential obfuscation that comes with defer.

2 Likes

If you rebracket the whole following scope, then yes, of course you can do the rewrite in either direction. But what was claimed was that you can't build the "out of order" defer syntax as a macro on top of do final, which remains true. You can, however, directly approximate do final with a macro on top of native defer support: the macro invocation encompasses only the do final syntax element, not any other code.

1 Like