Pre-RFC: flexible `try fn`


I worry a little bit about writing here, after being chastised for „monopolizing“ the discussion, but I guess throwing in an idea still counts as constructive, and an idea with actual code is worth one more comment. I apologize if this was brought up before somewhere, but I’m not aware of it (if so, please share a link and I’ll cross out the post so it doesn’t distract people).

I like the idea of the attribute and doing it in a library. And I wonder if this could be stretched even further. So, as a mental experiment, could we implement the try block in this way, as a macro? I know the try block is in a merged RFC and has a keyword reserved, but it hasn’t been stabilized yet, so there’s at least a theoretical chance to still play with it. Naming in the examples is deliberately silly so as not to suggest anything (eg. bikeshed later, after deciding whether this could actually fly).

I don’t think I can do exactly the try block semantics, but I can do a block that stops all early returns:

```rust
// Yes, only a lambda that is called right away, nothing fancy
macro_rules! stop {
    ($( $body: tt )*) => {
        (|| { $( $body )* })()
    };
}

let x = stop! {
    let a = 2;
    let b = 2;
    if a > 0 {
        return a + b;
    }
    // Fallback value when the early return is not taken
    0
};
assert_eq!(x, 4);
```

This one does not ok-wrap the final statement (yet, see below). On one hand that makes it more verbose (and it still needs the Ok(())); on the other hand, it also works for non-error-handling situations (like, if I have something like nom 3’s IResult, which has 3 variants, or iterating through something, early-returning the first matching value, and providing a fallback after the for). But it does support the use case of running a series of operations and getting a result of the whole, which is AFAIK the main motivation of the try block:

```rust
// first_step and second_step stand in for whatever fallible
// operations were being chained here.
let result: Result<(), Error> = stop! {
    first_step()?;
    second_step()?;
    Ok(())
};
```
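The non-error-handling case mentioned above (early-returning the first matching value, with a fallback after the for) can be sketched like this; first_even and the inputs are made up for illustration:

```rust
// stop! repeated here so the example is self-contained.
macro_rules! stop {
    ($( $body: tt )*) => {
        (|| { $( $body )* })()
    };
}

// Return the first even number, early-returning out of the loop,
// with a fallback value after the for.
fn first_even(items: &[i32]) -> Option<i32> {
    stop! {
        for n in items {
            if n % 2 == 0 {
                return Some(*n);
            }
        }
        None
    }
}

fn main() {
    assert_eq!(first_even(&[3, 5, 8, 11]), Some(8));
    assert_eq!(first_even(&[3, 5]), None);
}
```

Note there is no Result or ? anywhere here; the block just stops the early return, which is exactly what makes it usable outside error handling.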

Now, because some people do want ok-wrapping, let’s implement that too:

```rust
// If we want it to run on stable, we can use Ok
// there instead of from_ok, at the cost of flexibility.
macro_rules! wrap {
    ($( $body: tt )*) => {
        (|| { ::std::ops::Try::from_ok({ $( $body )* }) })()
    };
}

fn do_stuff() -> Result<(), ()> {
    // Some fallible work; the body is elided here.
    Ok(())
}

let y: Result<(), ()> = wrap! {
    do_stuff()?;
    // See? No Ok(()) here.
};
```

Pros & cons

  • (+) It is very minimal and low-cost. It can be put into a crate. It can be used right now (or, after someone comes up with reasonable names and puts it onto crates.io), without an RFC process, support in the compiler, etc.
  • (+) It is more flexible by supporting other use cases than just error handling.
  • (+) Due to the low cost, both wrapping and non-wrapping variants can be supported.
  • (-) The macro syntax is less visually pleasing.
  • (+) It would allow experimenting with it first and then promoting one (or both) of them to a keyword, if it is deemed used widely enough. Furthermore, there was a contentious discussion about whether ok-wrapping is a good thing or not. The decision to make it wrapping was made, but based on predictions. This experimentation could provide some factual numbers, which are generally a better indicator. On the other hand, it is in a sense a solution by postponing the problem and could lead to re-opening old discussions.
  • (?) The biggest difference is that return from within a try block returns from the function, while this returns from the block. It’s not possible to exit the outer function from within the wrap! macro. On one hand, this is a limitation. On the other hand, it makes the need for throw/fail/pass control flow keywords go away (if I understand correctly that breaking out of try is the main motivation for these) ‒ the core language already supports everything needed to make it useful. And I personally believe it would make reasoning about control flow in the function easier, because try's property of being able to return to two different places points in the general direction of goto. I don’t know how often one would want to exit the outer function from within the block.


  • While this is related to error handling, like the original proposal, and took inspiration from it, I don’t really want to hijack the thread if that would be considered inappropriate. Should I move it to a separate thread?
  • Are there any further pros & cons I’ve missed?
  • Is the difference in not being able to return from the outer function important?
  • Does it sound like a possible step forward around error handling to anyone?
  • If the answer to the above is yes, would it make sense for some of the major error handling crates (failure, error-chain) to adopt it? I can put up a crate myself, but if it got included in one of these, it could gain more popularity and the experiment would be more valuable.


Quoting the context here just because I wasn’t able to respond quickly

To be clear, I was not criticizing this proposal to the point of saying it leads to incorrect code. I was critiquing the assumption that error handling code is different from any other code. “Error handling” is not a problem domain that can be solved in advance. You have to look at each program to know what is an error and how it should be handled. So “error handling mechanisms” are never more than just control flow mechanisms that may be used for error handling.

Well, no, you’re not talking about it. My point was that I wish we were, because it seems to me that it would better interact with the main semantic feature of “error handling” code: that we often don’t care if it performs poorly. But still, we don’t know if that’s true in advance. Some projects do need high-performance error handling, which is one of the reasons C++-style exceptions are a bad idea. Not everybody can use them, but everybody still has to reason about them.

The relevance to this proposal is that it is again a thing that not everybody will want to use, but they’ll still have to deal with. (Really only if we’re going to declare it idiomatic and lint on it – personally, I find the proposed style less readable and arbitrary, so it would sadden me to see it declared idiomatic.) If we’re not doing any of that, then I’d be fine with just never using try fn.


I don’t see how anybody could do better than return Ok(value)

Just because non-locality exists doesn’t mean more of it is acceptable. The issue here isn’t non-locality (in my opinion), but ambiguity. Today, neither return Ok(value) nor return value is ambiguous. Adding ambiguity to return would make almost all control flow harder to reason about. A new keyword works around that, but at the expense of breakage and arbitrariness. So it should be very difficult to come up with an auto-wrapping proposal that doesn’t lead to significant discontent.


I was not sure I was grasping how stop! stops early returns; @vorner confirmed the following example is correct:

```rust
fn f() -> usize {
    let x = stop! {
        let a = 2;
        let b = 2;
        if a > 0 {
            return a + b;
        }
        // Fallback when the early return is not taken
        0
    };
    return 2 * x;
}
assert_eq!(f(), 8);
```


With label-break-value it should be possible for a macro to handle control flow however it likes. (Probably with a proc macro and an AST visitor…)

Naming collisions aside, it could implement try! { .. } that stops ? propagation by desugaring ?s, that passes through/stops/forbids return, that implements any/all of throw/fail/pass/succeed, that does or does not include Ok-wrapping in any of the locations in question.

Maybe we should throw together a configurable macro that lets people try out their favorite flavors.
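As a rough sketch of what such a macro could target with label-break-value (now stable since Rust 1.65), with made-up function and label names: each desugared ? becomes a break out of the labeled block rather than a return from a closure, so a plain return would still exit the enclosing function.

```rust
// A sketch of the desugaring a macro could emit: each ? on an Option
// becomes a `break` out of the labeled block instead of early-returning
// from an immediately-invoked closure.
fn parse_pair(a: &str, b: &str) -> Option<(i32, i32)> {
    'tried: {
        let x = match a.parse::<i32>() {
            Ok(v) => v,
            Err(_) => break 'tried None,
        };
        let y = match b.parse::<i32>() {
            Ok(v) => v,
            Err(_) => break 'tried None,
        };
        Some((x, y))
    }
}

fn main() {
    assert_eq!(parse_pair("1", "2"), Some((1, 2)));
    assert_eq!(parse_pair("1", "x"), None);
}
```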


One of the major problems with proc-macros like these is that they don’t compose well with other macros. For example, you say “passes through/stops/forbids return”, but you won’t see any returns hidden behind a macro, e.g. the old try! macro or the common bail! macro.


I have to admit I found this comment a bit frustrating, because the idea that infinitely accruing new features has ultimately negative consequences is not exactly a revelation - we’re all very aware of it. The argumentum ad C++ has been made many times before. I’d love it if we could, as a community, move past this framing (just as wish we’d move past “explicit is better than implicit”), into a more nuanced analysis at how languages grow over time and how we can prevent a situation in which the language becomes dauntingly massive. All of us are concerned about this, can we shift the conversation to one which produces new insights?

Here are a few notes of my own toward that end (in no particular order, making no unified argument):

  • Languages change primarily in response to user needs. Backward compatibility means that language change is necessarily monotonic growth.
  • Different users have different - often conflicting - needs based on their different relationships to the language. We must balance these users’ needs against one another, and be empathetic to users with different needs from our own.
  • Rust at 1.0 was necessarily the smallest version of Rust that will ever exist from now on.
  • The few languages that have grown almost none at all since being released did so by narrowly limiting who they are for (I’m thinking of Clojure as the best example of this; Go to some extent, but I see them as only halfway into this philosophy). The use cases that Rust is targeting are too broad to simply discount most users like this.
  • The 1.0 release established the core identity of Rust (i.e. ownership, borrowing, traits). But it was understood at the time that it was a skeleton of the language Rust planned to be. Rust has been proved out, it is now in the process of being filled out.
  • The current phase of Rust development (“filling out”) will someday end, and a new phase will begin. The big question is when have we “filled out,” and what will come next?
  • More than simply accruing features, the real challenge languages like C++ face is trying to shift their core identity without breaking changes (i.e. from “C with classes” to “safe C++11” and beyond). This seems to be the cause of much of C++'s negative reputation - you’re not supposed to use half of it.
  • If an aspect of Rust’s core identity comes to be seen as wrong, what do we do? Do we make a major breaking change (Python)? Do we backwards compatibly shift our identity (C++)? Do we just accept it and maybe start working on new languages (who has done this? perhaps C?)?
  • Billions of lines of code and a reputation as a pile of junk is the best outcome any language has ever had, if we’re optimizing for impact on society or the industry. Can Rust really hope to have a better outcome in 2050 than C++ has in 2020? Should we really feel disappointed if it doesn’t?

These aren’t meant as challenges to you in particular, and I’d ask you not to respond to them as such. These are open ended statements and questions to try to dig us deeper into this question and past the reductive “don’t be like C++” narrative that I see the conversation currently ending at.

Fortifying the process against feature bloat

The way I read the post was a complaint about process.

Python has the advantage of having an extremely clear “zen of Python” as well as a “benevolent dictator for life” who is the final authority. This creates a language that has a very clear sense of itself from the start, something that Rust does not really have (yet).

Because Rust does not have this but is rapidly gaining popularity, it is probably reaching a bit of a “land rush” phase where it is extremely attractive to people who like to tweak syntax and language features. This is a good thing, in many ways. It means a lot of options are being discussed, and this is critical for ending up with the best implementation.

But the way the current RFC process is structured, I’m not sure there is a good forum for people who prefer the “do nothing” option. Thumbs-downing a lot of RFCs and forum threads is content-free and not helpful. Commenting on a proposal about how it goes against what you feel should be the “zen of Rust” is derailing and requires unpacking a lot of context that is probably off-topic for many people who want to discuss the details of the proposal. And finally, people who like the language as-is don’t want to hang out on the RFC repo all day commenting.

Perhaps it might be a good idea to have a more structured process for throttling RFCs (like the impl period, but with this explicitly being a stated goal), such as shelving decisions about the desirability of any and all syntax changes until a set point 6 or 9 months out. People could still submit and refine proposals, but they’d know that the decision about desirability is yet to be made, and the more conservative people could rest easy that things won’t suddenly land in nightly with half the ecosystem switching over before they even notice (yes, I’m being really hyperbolic here, but trying to make a point :slight_smile: )

The thing is, I’m not sure this will help. The default match bindings proposal was controversial as well, and it’s not immediately clear to me that this would fall under a “syntax change”. On the other hand, async/await has pretty broad support but involves syntax changes. Trying to impose too much structure on RFC timing might delay critical features (or lead to a lot of ecosystem churn if they are released in phases), or create unwarranted pressure to release features half-baked.


That is a stated goal of the impl period:

In particular, we are effectively spinning down the RFC process for 2017, after having merged almost 90 RFCs this year!


I think Boat’s Not Explicit or Aaron’s reasoning footprint posts are great examples of ones that aren’t about a proposal at all, but about how to evaluate proposals.

For example, from the former, needing to write Ok( repeatedly sounds like “Manual” to me. And from the latter, it’s low-impact on Applicability (there’s a “heads-up” – try – that it’s happening), Power (still type-checked, only applies to the final value, no control flow changes, just a wrapping), and Context-Dependence (the only place to look is the return type, and using a different type there – like Result<T> vs Option<T> – doesn’t actually change what you can write in the function body with try any more than it did without it).

I’d be happy to read a blog post that introduces another such metric and shows how try fn doesn’t follow it, but other existing choices in Rust do.

The problem I had with post 35 is that even where I agreed in the abstract, I didn’t agree in the specific.

For example, I agree that there shouldn’t gratuitously be two ways to do the same thing. But to “There are these 2 very differently looking ways, which are basically the same thing under the hood” my reaction is “isn’t ? another way to do the same thing?”, since the book even teaches it that way. Ditto basically anything in a “dialectical ratchet”.
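The two “very differently looking ways” in question, side by side (a sketch; the real ? desugaring also inserts a From conversion, which is a no-op here since both functions use the same error type):

```rust
use std::num::ParseIntError;

// The explicit match spelling...
fn incr_match(s: &str) -> Result<i32, ParseIntError> {
    let n = match s.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok(n + 1)
}

// ...and the ? spelling, which the book presents as sugar for the above.
fn incr_question(s: &str) -> Result<i32, ParseIntError> {
    Ok(s.parse::<i32>()? + 1)
}

fn main() {
    assert_eq!(incr_match("41"), incr_question("41"));
    assert!(incr_question("nope").is_err());
}
```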

And I agree that “auto-conversions are a [big] source of bugs in C++ programs” – they’re even mentioned as bad in Aaron’s ergonomics post – but this isn’t a coercion. That approach used to be desired, but was found to be problematic before even being proposed as an RFC, and since then the proposals have all had visible markers for “this is happening” (not even might be happening – ie for an Option it’s Some(x), not Option::from(x)).

Well, #2107 and #2120 were both postponed with “the lang team is very interested in pursuing improvements to Result-oriented programming along these lines […] after the impl period”, so it’s hard to blame anyone for bringing it up again now.

And I think the fact that different people keep coming up with proposals in the area over the course of multiple years is evidence towards there being a problem worth solving. What’s more, every time the proposal has been different. That’s not at all “Are the authors just hoping it’ll get through this time?”. The “loud opposition” just makes it less likely that someone would be willing to re-open the discussion without feeling strongly, as boats has elaborated before:

What’s really unfortunate is that what this thread demonstrates to me that when an idea is at the “30 minute sketch” stage, if I suspect it will be at all controversial, I can’t post it on this forum or dealing with that thread will be what I am doing all week.

There are plenty of topics that are opened, discussed, rejected, and then never come back up. This doesn’t seem to be one of those.

For another thing that came back valuably, see dyn, which, as I posted recently, went over 15 months from one of the most :-1:'d RFCs in the repo to one with broad support, despite only minor changes in approach.

JavaScript apparently calls that an IIFE (immediately invoked function expression). It may be worthy of a macro even outside the domain of error handling, since I’ve felt sad every time I’ve typed it.

You’re always welcome to make a new proposal. It’s probably worth defaulting to a new thread because discourse quickly gets unwieldy as they get long. I’m in favour of having concrete alternatives even when they’re to my own proposal.

Fortifying the process against feature bloat

I apologize for that. The post wasn’t formulated all too well and not in the best context. I was frustrated when writing it and I guess it is contagious.

I didn’t really mean to say (very loosely paraphrasing) „we are all doomed!!! :frowning: “ but more like „Isn’t there a way to fortify the process to make it easier to avoid that fate?“ I think the idea of a no-feature RFC is worth exploring and I have a few more little ideas about that. You’re right on the point that knowing about the danger is one of the best defences, but it’s not necessarily an externally visible one ‒ it may work against the bloat, but doesn’t work all too well against the fear of bloat. And it also requires energy to use, which could be spent somewhere else.

Also, wanting to improve something isn’t necessarily a critique :innocent:.

And, maybe my own feeling is that Rust is approaching or has almost approached the phase of „Filled out“.

Anyway, I think this doesn’t really belong in this thread (edit: For clarification, here I mean my ranting about the RFC process ‒ I think it could be an interesting discussion, but not here. It is „meta“). So, how about: let’s stop this meta-discussion here for now. I’ll try to let the ideas I have ripen, formulate them better than I did here, and open a new thread (with more constructive wording) about it.

And thanks to the others for pointing out the blog post wouldn’t have to be primarily about error handling, but about the deeper visions and reasoning about them ‒ it didn’t click at the time.

Fortifying the process against feature bloat

I just want to add a small vote of support for accepting try {..}-blocks #2388 while postponing throw/fail #2426.

I personally think that try {..}-blocks can be very handy in some situations, because they save the user from defining an extra function, but it could be great to see this feature in real life code bases before discussing throw/fail. Will the feature be used? Is Ok-wrapping useful? etc.


I find the term “Ok-wrapping” unfortunate, since it inherently frames it from a perspective where not doing it is the default, giving it the appearance of a weird implicit behaviour. The opposite framing, that not doing it is "implicit tail ?" is equally if not more valid, since it frames try as the direct inverse of ?, taking an expression producing a value which potentially branches with an error and turning it into a value representing both the success and failure cases. Success and failure values are both being wrapped; referring to “Ok-wrapping” as a stand-alone feature conceals the fact that not doing it is asymmetrical.


I find it interesting that Python is often mentioned like that. Python has a lot of baggage and a lot of weird things (like being a “batteries included” language that has no ISO 8601 dates in the stdlib…).

Ruby also has a BDFL, which leads to a lot of oddities. You know that thing about “the principle of least surprise”? That surprise is explicitly Matz’s surprise (“The principle of least surprise means principle of least my surprise.”)

I also don’t believe that a BDFL improves things. The way C++ went has nothing to do with having a committee; it has to do with being a language from the 80s. Many ideas that sounded good back then turned out to be bad. We can avoid those mistakes.

We’ll make our own instead. We’ve already made some.

Java is also designed by committee and still turned out an extremely coherent language.

BDFL is an old-fashioned process from the 90s. Rust wouldn’t be better if some of its leading figures were crowned king and given the final say.

I’m one of the “wait a little” crowd, regularly. I have argued in a couple of RFCs (especially around the module path changes) for not making the change (at least in the first couple of RFCs). Be reminded that module path changes are a thing that was struck down multiple times before the RFC was accepted. These opinions are heard.

People are too attached to the RFC process directly. Yes, the RFC process gives no venue for overarching discussion, because it’s focused on details. Also, if you want to work at the RFC level, yes, you totally have to make the time. But there are other venues, like this forum.

Also, we do make offers for people that don’t have time for RFC and encourage them to write about what they want to see in the language: The #rust2018 blogging campaign did that. Turns out, many people want to see changes. Not many people asked for a cooldown.

If you have a different plan for the wider evolution of Rust, everyone can write a long form text here or at your blog. People listen and take that stuff into account. Just be aware that not following your sketched path (in full) is also always an option :).

We are very aware of many problems the RFC process has.

I don’t find @withoutboats comment derailing, it lands precisely at the right spot. Its point is not “don’t say we should do this”, its point is “this is a very cheap argument, frequently made, that we can’t interact with”. It asks for nuance. Pointing to C++ and saying “accumulating features is bad” doesn’t help us. You can’t interact with that - it would ask for freezing. Pointing out how this feature might not improve things enough to cover the implementation cost helps us. That’s something that can be interacted with.

Fortifying the process against feature bloat

I went back to this and kept thinking. I have to say: thank you very much for the link. Why? Because this is the first instance of a problem statement in this thread I’ve managed to find (a bit vague, but good nevertheless). I really believe a problem statement helps a lot, both in having a meaningful discussion and in finding good solutions.

And after reading it, I found I have to change my mind regarding the „There really isn’t anything to fix around error handling“ position I believed in before.

The described situation (paraphrasing a bit): „I have a function prototype, full of .unwrap(), and now I want to do it properly with error handling. If it were Java, one would just throw an exception.“ I searched my mind and tried to imagine a scenario where it could go wrong:

  • I change the return value to Result<Whatever, Error> (cool, we have the failure crate, it takes care of wrapping any kind of error, just like exceptions do), replace all .unwrap() with ? and wrap the last thing in Ok( ). Compile… meh, there’s a return, so I need to wrap that one too. That wasn’t that hard. Somewhat mechanical (:thinking: could RLS do that? Or cargo somethingsomething?), but easy, let’s move on. Not really worth crying much about, for me.
  • Compile… Crap, someone actually calls that function. If I’m lucky, it’s in a function that already returns Result, so just adding ?. If not, another level of ↑. Yep, this is getting a bit tedious, maybe I should do it after lunch when I don’t have the mental capacity for anything better anyway… but I should at least look at these functions if the new error handling doesn’t break anything.
  • Oh nooo, I call it from somewhere in the middle of a very long method chain. I have to rewrite the whole thing, because now Results are everywhere. How do I early-exit the iterator if there’s an error? Would it be better written as explicit for? git reset --hard, you know, the errors don’t happen that often in practice anyway…
  • [Alternative]: Oh noooo, I need to call this function from a callback I pass to something from another crate. There’s no way I can get the Result through that, how can I extract my errors out? I guess I’ll just have to place the unwrap() at the top-level of the callback.
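Step 1 of the story above, sketched with a made-up helper to show how mechanical the transformation is (using a boxed error here instead of the failure crate’s Error, to keep the sketch dependency-free):

```rust
// The prototype, full of unwrap(), with one early return:
fn port_prototype(config_line: &str) -> u16 {
    if config_line.is_empty() {
        return 8080; // default port
    }
    config_line.trim().parse().unwrap()
}

// After step 1: change the return type, replace unwrap() with ?,
// Ok-wrap the final value... and, oops, the early return too.
fn port(config_line: &str) -> Result<u16, std::num::ParseIntError> {
    if config_line.is_empty() {
        return Ok(8080);
    }
    Ok(config_line.trim().parse()?)
}

fn main() {
    assert_eq!(port_prototype("9090"), 9090);
    assert_eq!(port("9090").unwrap(), 9090);
    assert_eq!(port("").unwrap(), 8080);
    assert!(port("not a port").is_err());
}
```

The transformation itself is indeed routine; as the following bullets describe, the pain starts at the call sites.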

My take from this:

  • Yes, there are pain points worth fixing in Rust’s error handling.
  • Have we already collected, or should we collect, some scenarios and stories of when error handling is really annoying? I mean, Ok(()) pops up a lot, but doesn’t seem to be the biggest pain point, and several people have hinted as much. It’s not nice, but IMO probably not worth some big global changes. A constant (std::prelude::DONE)?
  • I agree this is easier in Java (or something) where I just throw an exception, catch it 7 levels up the call stack, and be done. However, I think this is ergonomics of the exception semantics, not their syntax. Replacing return Err(whatever) with throw whatever doesn’t really buy me much, at least not in the above story I could come up with. I don’t know if exception syntax is more ergonomic or just a byproduct of their semantics (because exceptions just need special syntax). It might make step 1 slightly faster (maybe, maybe not, but for the sake of the argument, let’s say it does), but doesn’t solve the real problems.

(And no, I don’t have a solution for the problem highlighted here)


Unless you are calling the function from a method handle on a streams method (or some other context where checked exceptions weren’t accounted for).


@vorner It’s worth mentioning that this problem of “viral error types” is why I’ve always been a big fan of the “enum impl Trait” idea in its various incarnations. While that doesn’t help much in the 0 errors -> 1 error transition, it helps immensely with any M errors -> M+N errors transition.
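For context, here is a hand-written sketch of the boilerplate such an “enum impl Trait” feature would presumably generate (all names made up): a one-off enum over the error types, plus From impls so ? can convert into it.

```rust
use std::num::{ParseFloatError, ParseIntError};

// The manual version of what "enum impl Trait" would automate.
#[derive(Debug)]
enum NumError {
    Int(ParseIntError),
    Float(ParseFloatError),
}

impl From<ParseIntError> for NumError {
    fn from(e: ParseIntError) -> Self { NumError::Int(e) }
}

impl From<ParseFloatError> for NumError {
    fn from(e: ParseFloatError) -> Self { NumError::Float(e) }
}

// Adding an N-th error source means touching the enum and writing
// another From impl by hand ('M errors -> M+N errors'); the idea
// above is to generate exactly this.
fn parse_ratio(a: &str, b: &str) -> Result<f64, NumError> {
    let num = a.parse::<i32>()?;
    let den = b.parse::<f64>()?;
    Ok(num as f64 / den)
}

fn main() {
    assert_eq!(parse_ratio("3", "2.0").unwrap(), 1.5);
    assert!(parse_ratio("x", "2.0").is_err());
}
```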

I’d also like a targeted solution for getting rid of Ok(()), without any other wrapping or scoping sugars, just as an extension to the already special treatment of (). I should probably write an actual proposal or blog post about this at some point.

I currently am not aware of any proposal that, in my opinion, makes a big difference in the no errors -> any errors transition. I think some have claimed that try{}/throw/etc help with this case, but that’s something I’m skeptical of, and would need to see a little bit of evidence for. Unfortunately every time I’ve asked a fan of try{}/throw/etc to dive into that claim, I didn’t even get a toy example where it made a significant difference. So if anyone knows of an argument that’s any less subjective than “I think it would help”, please bring it up, because I’m becoming convinced that there is no such argument.


This specific situation is not really a pain point with Rust’s error handling though, is it? It might seem like it is, because this sort of thing is usually not painful in other languages. However, I think it’s more like ownership and borrowing. In a language with a less powerful type system, with no ownership or borrowing enforcing the RWLock (aliasing XOR mutability) pattern, it might well be “easier” to write certain kinds of code and get it to compile, because the compiler doesn’t constantly complain about moved-from lvalues or simultaneous mutation and aliasing. However, this is an illusion: chances are that in those other languages, the code which compiled, but whose equivalent wouldn’t compile in Rust, was simply incorrect and had the wrong semantics!

The situation you described is very similar to the above. To put it this way: the fact that you can’t “just throw an exception” is not a bug, it’s a feature. It is the result of the type system requiring you to handle errors. It might have been convenient to just throw an exception, but at that point, the code would probably have become incorrect, since the users of the now-fallible function don’t get notified that they need to handle errors, which basically defeats the point of error handling.

It might be painful, but the pain comes from (and is inherent in) the need of handling errors and doing it correctly, and is not present because Rust is doing error handling wrong. Rust, unlike many other languages with conventional unchecked exceptions, merely exposes this pain, instead of letting it go unnoticed – which is why we are using this language in the first place… :slight_smile:

By the way, a useful technique for improving user experience in the specific situation you described is this: make that function return Result upfront! Even if it returns Result, you don’t have to handle errors correctly right away. You might just unwrap everything in the function body, and return Ok(…) at the end. But then you will write the rest of the code (that uses this function) with the correct assumption about its fallibility in mind, and thus the changes will be limited to the body of the function once you start fleshing out real error handling.
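A minimal sketch of that technique, with a made-up function: the signature already returns Result, while the body still just unwraps.

```rust
// Prototype body: unwrap for now, flesh out real handling later.
// Only this function's body will need changing; callers already
// see the final, fallible signature.
fn sum_line(input: &str) -> Result<i64, std::num::ParseIntError> {
    let total = input
        .split_whitespace()
        .map(|w| w.parse::<i64>().unwrap())
        .sum::<i64>();
    Ok(total)
}

fn main() {
    // Callers already propagate with ? (or unwrap), so they won't
    // need rewriting once the unwrap()s inside become ?s.
    assert_eq!(sum_line("1 2 3").unwrap(), 6);
}
```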


I can argue this way and rationalize why the pain is the good kind of pain… but that doesn’t mean the pain is not there. If we decide there’s no solution (and close the discussion once and for all, so there’s a conclusion to point to), then OK. But still, one has to wonder if there could be less of the pain ‒ if not „have your cake and eat it too“, then at least nibble a bit. You mention the borrow checker and I’ll take you up on that. While there’s certain pain around it and we all believe it helps us write better programs, that didn’t stop people from coming up with NLL, which eliminates some of the pain where it was frustrating because the compiler wasn’t pointing out actual errors.

So, as a really crazy idea, would it make sense to think about whether higher-order functions could be written in a fallibility-agnostic way? A map that, when called with an infallible function, returns an iterator, and in the fallible case returns a Result<iterator>? Or even better, to come up with a mechanism that could be used for being both fallibility-agnostic and async-agnostic, and would allow being agnostic to something completely different (eg. support IResult with 3 variants)? Not saying this is definitely a good solution or even direction, but trying to say: if there’s a specific problem on the table, we can search for a solution. But „error handling is hard in Rust“ is hard to act on, just as, correctly pointed out above, „being like C++ is bad“ is hard to act on.
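For what it’s worth, plain collect() already approximates the fallible half of such an agnostic map today (a sketch with made-up names): the same iterator chain can gather into a plain Vec or into a Result of one, so an error “bubbles out” without rewriting the chain as an explicit for.

```rust
// collect() can target Result<Vec<_>, _>, stopping at the first Err,
// which acts like the early exit the post above asks about.
fn doubled(items: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
    items
        .iter()
        .map(|s| s.parse::<i32>().map(|n| n * 2))
        .collect()
}

fn main() {
    assert_eq!(doubled(&["1", "2"]), Ok(vec![2, 4]));
    assert!(doubled(&["1", "x"]).is_err());
}
```

It does not make map itself fallibility-agnostic (the closure still has to produce Results), but it shows the standard library already leans in this direction.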

Sure and I do that (that’s why I needed to think hard to come up with the scenario). But the advice kind of requires time travel at the point where you already didn’t do it.

So the solution might actually be as simple as adding a tip to the error handling chapter of The Book. Or, it might help somewhat.


Koka has composable, extensible, and generic effects. What that means is that you can write a function which is error-handling- and async-agnostic.

For example, a map function which has the correct behavior regardless of whether the callback has error handling or not, regardless of whether it is async or not, etc.

I doubt Rust will go in that direction, but it’s super cool that it’s even possible to do it in the first place.


In the effects-design space there’s also McBride et al.’s very interesting paper on Frank.