Idea: non-local control flow

Kotlin has a nice feature that lets you return/break/continue across function boundaries, provided the functions are inlined. For example:

public inline fun repeat(times: Int, action: (Int) -> Unit) {
    for (index in 0 until times) {
        action(index)
    }
}

action is a closure that accepts an integer. The function can be used like this:

repeat(100) { i ->
    println(i)
}

Because it's an inline function, return/continue/break keywords within the closure ignore the function boundary; this is called non-local control flow:

fun foo(): Int {
    repeat(100) {
        return 4  // returns from foo()
    }
    return 5
}

I think that this would be useful in Rust as well. However, we would need labels to indicate that we want to leave the enclosing function:

'f: fn product(s: &[i32]) -> i32 {
    // multiply all values, but return early if the slice contains 0
    s.iter().fold(1, |acc, &x| {
        if x == 0 {
            return 'f 0;
        }
        acc * x
    })
}

To mark a function argument that allows non-local control flow, we could use an annotation, for example

pub fn repeat(times: i32, #[non_local] action: impl Fn(i32)) {
    for index in 0..times {
        action(index);
    }
}

The compiler must inline all closures that contain non-local control flow.

What do you think about this idea?


I'm against this because it will turn closures from pure sugar into something magical. I really like how simple the closure implementation is.

We would have to enforce that this new non-local control flow (NLCF) closure can't escape past the lifetime of the function we are returning to. This could be done with lifetimes (by making any closure with NLCF have a lifetime parameter that ties it to the stack). But this would make unsafe code way more dangerous than it already is, you can now trigger NLCF and jump into the middle of some arbitrary function with incorrect unsafe code. How much of a problem is this? I don't know, but it's an edge case that looks dangerous.


Since I said that the compiler must inline all closures that contain non-local control flow, closures with NLCF wouldn't have their own stack frame, and NLCF would act like "normal" control flow.

I don't think so, since NLCF is only possible with the return/break/continue keywords, so you can leave an inlined function early, but you can't jump anywhere. Also, NLCF is opt-in (with the #[non_local] annotation).

It can lead to pitfalls:

fn foo(#[non_local] f: impl Fn()) {
    f();
    cleanup();  // this isn't guaranteed to run!
                // You need to implement `Drop` instead
}
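To make the `Drop` suggestion concrete, here is a sketch of the guard pattern (the `CleanupGuard` type and the atomic flag are mine, standing in for a real `cleanup()`): the destructor runs on every exit path, so even a non-local exit or a panic inside the closure cannot skip it.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static CLEANUP_RAN: AtomicBool = AtomicBool::new(false);

struct CleanupGuard;

impl Drop for CleanupGuard {
    fn drop(&mut self) {
        // stands in for the real cleanup(); runs on every exit path
        CLEANUP_RAN.store(true, Ordering::SeqCst);
    }
}

fn foo(f: impl Fn()) {
    let _guard = CleanupGuard; // dropped even if `f` panics or exits non-locally
    f();
}

fn main() {
    foo(|| {});
    assert!(CLEANUP_RAN.load(Ordering::SeqCst));
}
```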

When I learned about this concept in Kotlin, I also thought that it can be messy, but in practice it works really well!

Kotlin's closure-parameters-to-inline-functions are arguably far simpler than Rust's closures. There is no closure object and no captures, just normal code running in a single stack frame. The inline function is literally just copied into its caller, with any closure arguments pasted into their call sites.

This means there is nothing additional to enforce, no need for additional lifetimes, and no impact on unsafe code beyond the impact of a normal early-exit written directly in the unsafe block.

The design is essentially just a usefully-restricted form of macros you can use to factor out chunks of your control flow graph with holes.


So every implementation of Iterator::fold would need that annotation to meet your example?


Good point, I hadn't thought about that. So adding the annotation to a trait method would be a breaking change :frowning:

The try_fold function already exists to solve this precise example, and in general that's the pattern that should be used in Rust to solve it.
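For instance, the early-exit product from the opening example can be written with `try_fold` today; this is a sketch (the helper name and the use of `Err(())` as the short-circuit signal are mine):

```rust
// Short-circuiting product: `Err(())` aborts the fold as soon as a
// zero is seen, and `unwrap_or(0)` maps that early exit back to 0.
fn product(s: &[i32]) -> i32 {
    s.iter()
        .try_fold(1, |acc, &x| if x == 0 { Err(()) } else { Ok(acc * x) })
        .unwrap_or(0)
}

fn main() {
    assert_eq!(product(&[2, 3, 4]), 24);
    assert_eq!(product(&[2, 0, 4]), 0);
}
```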

If you really want syntax sugar for it, I think it can be done with proc macros.

Hence, this proposal seems unnecessary, and it doesn't even work as described (the proposed parameter annotation effectively changes the type of both the function and the closure parameter to some new kind of type with unclear behavior, something the proposal doesn't even mention).


What happens if I try to Box::new(action), or stuff it in a global to be called later?

This pitfall already exists due to panicking in closures.

Kotlin is a very different language from Rust, so there may be things that are good for Kotlin, but not Rust.

I have a few questions related to this.

Is the increased binary size worth the (minor) convenience? You could structure the APIs similar to Iterator::try_fold to get similar results.

How composable are NLCF closures? I don't think they can be all that composable, because you can't really store them anywhere, i.e. they must be used immediately, or passed to another function that uses them immediately. Normal closures, on the other hand, are so versatile that they are everywhere, with few restrictions.

I don't see how useful this subset is, but that may just be me.

Also, as @cuviper noted, this plays poorly with traits (where the increased binary size will hit the hardest), which I find to be another good reason not to do this.


IIRC, Swift has something roughly similar, but there it's spelled @noescape (Args) -> Ret. (Well, now it's just spelled (Args) -> Ret, as @noescape is the default and @escaping is the opt-in version.)

I don't actually know if they allow control flow to jump out of a @noescape closure, but Swift does show at scale that a @noescape default is tractable. The majority of closures are called within the function they're passed to (at least with Swift API design), so this enables the better, more optimizable case in the common case. (As @escaping closures need to clone GC ref counts, and almost always should capture [weak self].)

This works so well for Swift, though, because @noescape is the default. This is still another "color" for function arguments, and this coloring only really works painlessly when the least privileged is the default, and the more privileged can transparently be used as less privileged. Rust's default is "@escaping", which basically means that providing a "@noescape" option on top is never going to have a real meaningful amount of benefit. I think a @noescape default is a good thing, as it makes it more obvious when closures are the immediately-invoked kind versus potentially long-lived callback functions. (Similar to how I prefer structured concurrency to unbounded thread spawning.) But Rust has picked its default and trying to bolt on a @noescape after the fact is not really worth the effort.

Plus, the Fn family really is "just another trait" for Rust currently, after the magical naming syntax and implementations for closures. Adding the "escaping color" is an entirely new dimension to the type system that didn't exist before.


I think many of these usecases can be achieved with macros. They are by definition always inlined, can accept expressions as arguments, and flow control keywords take action based on the resulting code.
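As a sketch of that point (the `repeat!` macro and the example function are mine, mirroring the Kotlin `repeat` from the top of the thread): because a declarative macro is expanded inline at the call site, a `return` in the block it receives returns from the *caller*, which is exactly the proposed non-local control flow.

```rust
// Hypothetical macro analogue of the inline `repeat` function: the
// body is pasted into the caller, so `return`/`break`/`continue`
// inside it refer to the caller's frame and loops.
macro_rules! repeat {
    ($times:expr, |$i:ident| $body:block) => {
        for $i in 0..$times {
            $body
        }
    };
}

fn first_multiple_of_7(limit: i32) -> Option<i32> {
    repeat!(limit, |i| {
        if i > 0 && i % 7 == 0 {
            return Some(i); // returns from first_multiple_of_7, not just the block
        }
    });
    None
}

fn main() {
    assert_eq!(first_multiple_of_7(100), Some(7));
}
```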

It would even be possible to write a proc macro that, applied as an annotation to a function, automatically generates the macro to facilitate writing this sort of code.

Moving beyond functions to methods would require postfix macros, which have been discussed before and would no doubt be a powerful, if difficult to implement, extension to Rust.


I think this is interesting from a language design, theoretical point of view.

But what are the practical applications? Admittedly I haven't used Kotlin or Swift, but I also never felt a need for something like this.

To clarify: I mean this limited escape-from-closure functionality. OTOH, full-on co-routine support, now that would be spiffy!


I think the syntax sugar that lets you move a trailing closure argument outside the call's parentheses is nice, since there is less parenthesis stacking,

iter.for_each() |it| {
    println!("{}", it);
}

but non-local control flow would complicate the semantics too much.

We can already do this with macros, so there doesn't appear to be any motivation for a new language feature here.

In particular:

So as currently proposed, it's little more than an alternative syntax for a tiny subset of macros. I'm guessing the only difference is it type checks the arguments, which doesn't seem like nearly enough to motivate all this.

(These points were hinted at once or twice in previous posts, but IMO not given enough emphasis. The stuff about "closures being magic" and "NLCF is confusing" are important, but nowhere near the instant knockout of "we basically already have this with different syntax".)


One of my common uses for functions is specifically to use return instead of having to carefully manage breaks (though I generally find Rust handles this well).

I'd want to know for sure I wasn't going to get some return or break going outside the function boundary that I didn't want to.


To add another couple of data points, the above proposal is roughly equivalent to yield, which is available in nightly now. Replace action(item) with yield item in a generator block, and it should work the same.

More generally, I think this whole topic is a discussion of internal vs. external iteration (see this blog post for more details).

Rust previously (circa 2013) had dedicated syntax for this type of non-local control flow/internal iteration. It was replaced with external iteration back in ~2014; searching "Rust external iteration" will give you more of the history.


I don't understand how this is similar to generators. action(index) and yield index are entirely different things. The first calls a closure, the second yields a value.

Other use cases for this include Option::unwrap_or_else, Option::and_then and Entry::or_insert_with.
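For comparison, here is a sketch (function name and behavior are mine) of what the `unwrap_or_else` case has to look like today: the early return must be hoisted out of the closure, e.g. with a `match` or `let ... else`, because a `return` inside the closure would only leave the closure.

```rust
// Today's workaround: `let ... else` gives the early return that a
// hypothetical non-local return through `unwrap_or_else` would provide.
fn parse_or_default(input: &str) -> i32 {
    let Ok(n) = input.parse::<i32>() else {
        return -1; // early return from parse_or_default, no closure involved
    };
    n * 2
}

fn main() {
    assert_eq!(parse_or_default("21"), 42);
    assert_eq!(parse_or_default("oops"), -1);
}
```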

try_fold with ? is a workaround for non-local returns, but

  • it is more verbose and requires returning a Result
  • it's not feasible to add a try_* function for every higher-order iterator function to allow early returns
  • it doesn't cover non-local break/continue
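On the last point, `ControlFlow` with `try_for_each` can emulate a non-local break today, though it illustrates the verbosity being complained about; a sketch (the function and its search logic are mine):

```rust
use std::ops::ControlFlow;

// Emulating "break with a value" out of internal iteration:
// `ControlFlow::Break` stops `try_for_each` early, and the caller
// translates the result back into ordinary control flow.
fn find_first_negative(s: &[i32]) -> Option<i32> {
    let res = s.iter().try_for_each(|&x| {
        if x < 0 {
            ControlFlow::Break(x)
        } else {
            ControlFlow::Continue(())
        }
    });
    match res {
        ControlFlow::Break(x) => Some(x),
        ControlFlow::Continue(()) => None,
    }
}

fn main() {
    assert_eq!(find_first_negative(&[1, 2, -3, 4]), Some(-3));
}
```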

I'm not saying that Rust absolutely needs this feature (I'm aware of the downsides), I just wanted to discuss the problem space.

Changing how return behaves in a closure would be a breaking change. That's why I suggested that NLCF requires labels, so NLCF would always be explicit.

Macros have some deficiencies:

  • They don't compose well, since we don't have postfix macros
  • They aren't type-checked
  • They have limitations for IDE integration
  • They are inlined syntactically, so they can't access private items of the module where they were defined

It does: you can pass some sort of error code (via an enum) through the Err case of Result, and use that to decide what to do (return, break, continue, etc.). But yes, it is much harder to both read and write.
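A sketch of that enum-through-`Err` pattern (the `Flow` enum and example function are mine): the closure reports which control-flow action the caller should take, and the caller replays it.

```rust
// The closure encodes the desired non-local action in `Err`;
// the `match` afterwards turns it back into real control flow.
enum Flow {
    Return(i32),
    Break,
}

fn sum_rows(rows: &[&[i32]]) -> i32 {
    let mut total = 0;
    for row in rows {
        let res = row.iter().try_for_each(|&x| {
            if x < 0 {
                return Err(Flow::Return(x)); // acts like `return x` from sum_rows
            }
            if x == 0 {
                return Err(Flow::Break); // acts like `break` of the outer loop
            }
            total += x;
            Ok(())
        });
        match res {
            Ok(()) => {}
            Err(Flow::Return(x)) => return x,
            Err(Flow::Break) => break,
        }
    }
    total
}

fn main() {
    assert_eq!(sum_rows(&[&[1, 2], &[3, 4]]), 10);
}
```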

I don't see how this is any better wrt composability.

You can do some type checking with code like:

let _: String = $my_expr;
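Wrapped in a `macro_rules!` definition, that trick looks like this (a sketch; the macro name is mine): the annotated `let` forces a type error at the expansion site if the argument has the wrong type.

```rust
// Ad-hoc "type checking" in a declarative macro: the annotated `let`
// rejects any argument that isn't a `String` at compile time.
macro_rules! takes_string {
    ($my_expr:expr) => {{
        let s: String = $my_expr; // compile error here if the type is wrong
        s
    }};
}

fn main() {
    assert_eq!(takes_string!(String::from("ok")), "ok");
    // takes_string!(42); // would fail to compile
}
```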

IMO, this is probably the biggest limitation of macros

Macros 2.0 :unicorn: (the macro keyword) will fix this (if we ever get them); visibility based on span hygiene will mean names in the macro refer to things in the macro's def scope. macro defs can be written almost like a function, syntax-wise, so they probably fit this exact use case.

Unfortunately, because function coloring means @noescape Fn() is not compatible with @escaping Fn(), the new function for every combinator would still be required. So adding @noescape versions is less feasible than adding try_ versions for those that don't already have them.


previous discussion:

("Tennent’s Correspondence Principle", "TCP-preserving closures")


AFAIK Swift does not allow such non-local control flow. @noescape exists for reasons of optimization.

Needless to say, I am strongly against this idea. It is a footgun that only makes a very niche use case more convenient.