Pre-RFC: flexible `try fn`

I really like two rules from Python:

  1. Explicit is better than implicit
  2. There should be only one obvious way to do a thing

This suggestion breaks both for very little benefit.

I can add more rules that apply here: Special cases aren’t special enough to break the rules. Simple is better than complex.

It seems this is adding complexity to solve a subset of special cases, at the cost of significantly changing the language.


In general, this seems like a good idea, but I'm afraid there's a bias here. It's much easier to write a blog post saying "If we made this change, it would have these advantages, these disadvantages of the current system would go away, and here's how great it would look" than one saying "You know, there are some little paper cuts, but overall what we have now seems like the best thing in the whole industry". Part of it is that we know the disadvantages of the current system, but can only guess at the ones the new one could bring, and people tend to be optimistic about this. It's a variation of the classic "Nobody will ever give you as good a thing as I promise you".

However, I feel there's a bigger problem, or a bigger form of bias, here. I remember a very similar proposal surfacing several times already. Every time, it drew quite loud opposition, heated discussion, bad air; in other words, the very opposite of "consensus". It was deemed "controversial" and it died. It wasn't about bikeshedding the right name for something, but about a deep conceptual disagreement with the change as a whole. Yet we see it appear again and again, and we have the discussion again and again. This drains the energy of all sides. Are the authors just hoping it'll get through this time? Or do they think they know better than half of the community (based on the, arguably small, statistical sample of people actually discussing the issue)?

Because if one side wants to keep the status quo (maybe with some minor tweaks), it has to defend it every single time. If the other side wants a change, it needs to get through just once, and then there's no reasonable way back, due to stability promises. Is there a negative RFC template of the form "We don't want this; let's not discuss it again unless one of these conditions changes", or something like it? C++ is a pretty good case study in the sense that what is not included in the language is often more important than what is. The standardization committee is made up of very smart people, yet they repeatedly add more bloat to the language. I'm afraid there is some kind of Ivory Tower problem there.

Yet there's no real way to remove things from the language, and I haven't seen a process for deciding that a feature is unwanted (there are some features the folklore knows are unwanted, but to my knowledge, this is not written down anywhere). Are we doomed to end up with a C++-like multi-tentacle monster due to how the RFC process works, only much faster, because Rust has a more agile process?

Sorry if I sound a bit harsh in places. This isn't supposed to be personal; it's a professional disagreement, and more with the "crowd mind" than with any particular person. If I wrote something that feels personal, please understand I slept very badly last night because of how this thread made me sad, so my social judgment might not be working as it should. Still, I felt it was better to write this than to keep it to myself.


I don’t think this is the right way to look at it. break doesn’t actually need to do any wrapping, conceptually. The block evaluates to the success type T, and break represents early return from that block. try then wraps the value in a type which can also represent the failure case.

Edit: Of course, the difference then is that break-from-try is an inner break, while break-from-loop is an outer break, i.e.

'loop: { loop { break 'loop <value>; } }


try { 'try: { break 'try <value>; } }
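On stable Rust, this "inner break" is already expressible with labeled blocks (label-break-value). A minimal sketch, with function and label names of my own choosing, not from the proposal:

```rust
// Sketch of the "inner break" semantics using a labeled block:
// `break 'search` exits only the labeled block, even from inside a
// nested loop, which is what break-from-try would do.
fn first_even(nums: &[i32]) -> Option<i32> {
    let found = 'search: {
        for &n in nums {
            if n % 2 == 0 {
                // Early exit from the block with a value, not from the fn.
                break 'search Some(n);
            }
        }
        None
    };
    found
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4, 5]), Some(4));
    assert_eq!(first_even(&[1, 3, 5]), None);
}
```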

@Centril @newpavlov Unfortunately what both of you propose, one way or another, is the introduction of yet another syntax form for a more or less stabilized feature.

Introduction of try fn. As someone mentioned above, there's a much more natural form, fn x(...) -> R throws E, which, as I understand, was already proposed. try fn only introduces a different function declaration syntax just to shave off a few symbols: return Ok(...) becomes return ..., and Err(...)? becomes fail .... Yes, I read the initial post, with at least two more traits added to the stdlib. Is it worth it? I doubt it.

Your master plan effectively includes

  1. Introduce auto-wrapping based on the function declaration.
  2. Introduce a different meaning for the break keyword.
  3. Introduce an auto-wrapping dichotomy, where fail always wraps as Err, but return and break don't.

You intend to introduce a bunch of corner cases just to transform

fn foo(arg: usize) -> Result<usize, usize> {
  if condition(...) { Ok(arg + 1) } else { Err(42)? }
}

into

try fn foo(arg: usize) -> Result<usize, usize> {
  if condition(...) { break arg + 1 } else { fail 42 }
}

Please consider the amount of cognitive load needed to learn both the normal path and your path, with all of their variations.

If you ask me, I would look towards something like this:

fn foo(arg: usize) -> usize throws usize {
  if condition(...) { arg + 1 } else { throw 42 }
}
Because it explicitly separates the success and error types. The problem with it is that it hides the fact that Result is used under the hood.
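For comparison, here is a rough sketch of what that throws form would desugar to in today's explicit Result style. The cond parameter is a stand-in of mine for the unspecified condition(...) call:

```rust
// Hypothetical `fn foo(arg: usize) -> usize throws usize` desugared
// into today's explicit Result form. `cond` stands in for the
// `condition(...)` call in the example above.
fn foo(arg: usize, cond: bool) -> Result<usize, usize> {
    if cond {
        Ok(arg + 1) // plain `arg + 1` under the `throws` syntax
    } else {
        Err(42)     // `throw 42` under the `throws` syntax
    }
}

fn main() {
    assert_eq!(foo(1, true), Ok(2));
    assert_eq!(foo(1, false), Err(42));
}
```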


Well, why not try writing a "negative RFC", titled something like "Decline attempts to introduce exception-like syntax or semantics without strong argumentation and/or major support from the community"? Unfortunately, I'm usually too lazy :slight_smile:

This is an incredibly small sample of the community, especially when the same tiny group seems to monopolize these conversations every time they come up.

I understand that it feels like you have to keep making the same point over and over within the same thread, but I don’t think that’s necessary. Make your point once, bring up the tradeoffs you see, and let the other side of the argument do the same. There will be upsides to proposals you don’t like, even if you don’t care about them. If the process doesn’t advance (as it notably has not in this case), you don’t need to make your point again!

Many times, the language team hasn’t had a chance to even read the thread before it spirals out of control like this one, because every little bit of discussion makes you feel like you’re losing the fight. Those that do argue for the proposal you hate often don’t have a strong opinion one way or the other yet—they may bring up counterpoints just to have them on the table, or to explore the design space. And, you should note, they often do wind up agreeing with you! See @josh’s posts upthread, for example.

I hate to make this thread even noisier, but from where I stand it really looks like we need to put more trust in the process. The process is not a vote, the process is not a popularity contest, the process is not a shouting match. The process is a Request for Comments, i.e. a call for the community to describe how a proposal interacts with their use cases. Can we all leave it at that instead of feeling the need to shout down every single comment that we disagree with?


I’ve really enjoyed reading this thread, as long as it’s become, so I’d first like to thank everyone that’s contributed. I can say that as the thread evolved, I’ve liked what’s being proposed more so than the original proposal. But to add my $0.02, I think scaling back the proposal somewhat would hit some of my pain points without changing the language too radically.

What I like from what’s being proposed:

  • A fail keyword that wraps
  • Adding a trait or traits so that any keywords introduced can apply to user-defined types
  • A try block that resolves to an impl of that trait

What I’m less comfortable with:

  • Annotating fn with try…I feel that a fn that returns a Result or impl of the new trait is a natural try boundary
  • Overloading the meaning of break
  • The proposals for Ok() wrapping. Of the terms proposed, pass feels the closest. I could also see ‘resolve’ working, but maybe that’s just due to working with JS promises recently. But I’d almost rather have the asymmetry of only having a keyword for Err() wrapping.

The main hope, I guess, is that improving error handling is an evolution rather than a revolution. Making smaller, limited changes that aren’t too jarring seems preferable to trying to come up with a complete overhaul all at once.


Let me add a vote of qualified disapproval for the proposal, or rather one of strong “meh”. I’ll expand on my reasoning below, but the TL;DR is that while I see the point of autowrapping, I’m not convinced that, in the proposed form, it’s worth the complications that the change would bring to the language.

Up front, I must say that I don’t automatically support appeals to explicitness. I recognize that people have passionate convictions about it, enough that the phenomenon has already resulted in at least two in-depth discussions. I’m not going to repeat the arguments here, just mention that a) Rust already has a fair amount of magic in the form of autoref, autoderef, reborrowing, lifetime elision, closure capture inference, ?-induced error conversion, etc., and b) the introduction of some of those mechanisms elicited its share of dire warnings about loss of explicitness, among others. I wasn’t around for lifetime elision discussions, but I did follow the ? saga from the beginning.

That said, it’s very helpful to be able to reason about some properties of a program without a lot of indirection. I don’t think I’ve ever had any doubts about heap allocation or dynamic dispatch, for instance. (I don’t mean to imply that either of them is bad per se, far from it, just that one should be aware of them in certain contexts.) I’d prefer to keep fallibility of a function in the list of immediately obvious characteristics, as well, and to its credit the proposal under discussion doesn’t try to hide the Result (or any other) return type.

However, the proposal has two principal downsides as I see it, especially in its updated form. First, it's Result-centric to the detriment of other wrapper types. Presumably, if used with Option, you'd have things like fail None. A bit unfortunate, IMO. Second, the scope of changes is not small: two new keywords and the re-purposing of an existing construct (break).

(Edit: for Option, None is just fail;, which is OK.)

In an earlier topic, I tried to find out some numbers behind certain elements of that proposal. My source and methodology may not have been unimpeachable, but I consider them more informative than pure speculation. While the “editing distance” incidence wasn’t too impressive, the use of Ok(()) was found to be pervasive. I do consider it a mild irritant myself, but I think I’ll happily continue using it, rather than have a wrapping solution which brings too many other complications.


I think that is incorrect; it would simply be fail;


Right; I’ve edited the post.


This seems like how it should work, but it relies on a coercion from () to NoneError which doesn’t exist.

Edit: I guess the proposal assumes Option<T>: TryThrow<()> or whatever anyways.

This assumes that it would be based on (), which is not necessarily how it would work. The design of Try, NoneError, and all of this is very much in flux, so such assumptions shouldn't be made.

See The desugaring of throw; in the throw RFC for a discussion.
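For reference, on stable Rust today, ? on Option already gives the control flow being discussed, with no ()-to-NoneError coercion visible to the user. A small sketch (function name is mine):

```rust
// On None, `?` returns None from the enclosing function: roughly the
// control flow that a bare `fail;` would spell out in the proposal.
fn first_char_upper(s: &str) -> Option<char> {
    let c = s.chars().next()?; // early-returns None on empty input
    Some(c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(first_char_upper("abc"), Some('A'));
    assert_eq!(first_char_upper(""), None);
}
```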


Could this feature be implemented as a library-based decorator instead of a language feature?

#[try]
fn foo() -> Result<usize, ()> { 1 }

IMHO the decorator approach makes it clearer that #[try] is just an implementation detail. Furthermore it allows a function header in an impl-block to look the same as the function header in the corresponding trait declaration.


I like this solution a lot! I think this discussion has proved that this is a controversial topic. This way the people who want it can use the crate without changing the public API.

I worry a little about writing here, after being chastised for "monopolizing" the discussion, but I guess throwing an idea in still counts as constructive, and an idea with actual code is worth one more comment. I apologize if this was brought up before somewhere, but I'm not aware of it (if so, please share a link and I'll cross out the post so it doesn't distract people).

I like the idea of the attribute and doing it in a library. And I wonder if this could be stretched even further. So, as a mental experiment, could we implement the try block in this way, as a macro? I know the try block is in a merged RFC and has a keyword reserved, but it hasn't been stabilized yet, so there's at least a theoretical chance to still play with it. The naming in the examples is deliberately silly so as not to suggest anything (we can bikeshed later, after deciding whether this could actually fly).

I don’t think I can do exactly the try block semantics, but I can do a block that stops all early returns:

// Yes, only a lambda that is called right away, nothing fancy
macro_rules! stop {
    ($( $body: tt )*) => {
        (|| { $( $body )* })()
    };
}

let x = stop! {
    let a = 2;
    let b = 2;
    if a > 0 {
        return a + b;
    }
    b
};
assert_eq!(x, 4);

This one does not Ok-wrap the final expression (yet; see below). On one hand this makes it more verbose (and it still needs the Ok(())); on the other hand, it works for non-error-handling situations too (say, something like nom 3's IResult, which has 3 variants, or iterating through something, early-returning the first matching value, and providing a fallback after the for). But it does support the use case of trying to run a series of operations and getting a result for the whole, which is AFAIK the main motivation of the try block:

let result: Result<(), Error> = stop! {
    // ... a series of fallible operations using `?` ...
    Ok(())
};
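To make the non-error-handling case concrete, here is a self-contained sketch (names of mine) using the same stop! trick to early-return the first matching value, with a fallback after the loop:

```rust
// The closure-based `stop!` block used for plain control flow rather
// than error handling: `return` exits only the block, not the function.
macro_rules! stop {
    ($( $body:tt )*) => {
        (|| { $( $body )* })()
    };
}

fn first_big(nums: &[i32]) -> i32 {
    let found = stop! {
        for &n in nums {
            if n > 5 {
                return n; // leaves the stop! block, not first_big
            }
        }
        0 // fallback if nothing matched
    };
    found
}

fn main() {
    assert_eq!(first_big(&[1, 3, 7, 8]), 7);
    assert_eq!(first_big(&[1, 2]), 0);
}
```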

Now, because some people do want ok-wrapping, let’s implement that too:

// If we want it to run on stable, we can use Ok
// there instead of from_ok, at the cost of flexibility.
macro_rules! wrap {
    ($( $body: tt )*) => {
        (|| { ::std::ops::Try::from_ok({ $( $body )* })})()
    };
}

fn do_stuff() -> Result<(), ()> {
    Ok(())
}

let y: Result<(), ()> = wrap! {
    // See? No Ok(()) here.
    do_stuff()?;
};

Pros & cons

  • (+) It is very minimal and low-cost. It can be put into a crate. It can be used right now (or, after someone comes up with reasonable names, published as a crate), without an RFC process, support in the compiler, etc.
  • (+) It is more flexible by supporting other use cases than just error handling.
  • (+) Due to the low cost, both wrapping and non-wrapping variants can be supported.
  • (-) The macro syntax is less visually pleasing.
  • (+) It would allow experimenting with it first and then promoting one (or both) of them to a keyword, if it is deemed to be used enough. Furthermore, there was a contentious discussion about whether Ok-wrapping is a good thing or not. The decision to make it wrapping was made, but based on predictions. This experimentation could provide some factual numbers, which are generally a better indicator. On the other hand, it is in a sense a solution by postponing the problem and could lead to re-opening old discussions.
  • (?) The biggest difference is that return from within a try block returns from the function, while this returns from the block. It's not possible to exit the outer function from within the wrap! macro. On one hand, this is a limitation. On the other hand, it makes the need for throw/fail/pass control-flow keywords go away (if I understand correctly that breaking out of try is the main motivation for these): the core language already supports everything needed to make it useful. And I personally believe it would make reasoning about control flow in the function easier; the try block's ability to return to two different places heads in the general direction of goto. I don't know how often one would want to exit the outer function from within the block.


  • While this is related to error handling, like the original proposal, and took inspiration from it, I don't really want to hijack the thread if that would be considered inappropriate. Should I move it to a separate thread?
  • Are there any further pros & cons I’ve missed?
  • Is the difference in not being able to return from the outer function important?
  • Does it sound like a possible step forward around error handling to anyone?
  • If the answer to the above is yes, would it make sense for some of the major error handling crates (failure, error-chain) to adopt it? I can put up a crate myself, but if it got included in one of these, it could gain more popularity and the experiment would be more valuable.

Quoting the context here just because I wasn’t able to respond quickly

To be clear, I was not criticizing this proposal to the point of saying it leads to incorrect code. I was critiquing the assumption that error handling code is different from any other code. “Error handling” is not a problem domain that can be solved in advance. You have to look at each program to know what is an error and how it should be handled. So “error handling mechanisms” are never more than just control flow mechanisms that may be used for error handling.

Well, no, you're not talking about it. My point was that I wish we were, because it seems to me that it would better interact with the main semantic feature of "error handling" code: that we often don't care if it performs poorly. But still, we don't know if that's true in advance. Some projects do need high-performance error handling, which is one of the reasons C++-style exceptions are a bad idea. Not everybody can use them, but everybody still has to reason about them.

The relevance to this proposal is that it is again a thing that not everybody will want to use, but they’ll still have to deal with. (Really only if we’re going to declare it idiomatic and lint on it – personally, I find the proposed style less readable and arbitrary, so it would sadden me to see it declared idiomatic.) If we’re not doing any of that, then I’d be fine with just never using try fn.


I don’t see how anybody could do better than return Ok(value)

Just because non-locality exists doesn't mean more of it is acceptable. The issue here isn't non-locality (in my opinion), but ambiguity. Today, neither return Ok(value) nor return value is ambiguous. Adding ambiguity to return would make almost all control flow harder to reason about. A new keyword works around that, but at the expense of breakage and arbitrariness. So it should be very difficult to come up with an auto-wrapping proposal that doesn't lead to significant discontent.


I wasn't sure I was grasping how stop! stops early returns; @vorner confirmed the following example is correct:

fn f() -> usize {
    let x = stop! {
        let a = 2;
        let b = 2;
        if a > 0 {
            return a + b;
        }
        b
    };
    return 2 * x;
}
assert_eq!(f(), 8);

With label-break-value it should be possible for a macro to handle control flow however it likes. (Probably with a proc macro and an AST visitor…)

Naming collisions aside, it could implement try! { .. } that stops ? propagation by desugaring ?s, that passes through/stops/forbids return, that implements any/all of throw/fail/pass/succeed, that does or does not include Ok-wrapping in any of the locations in question.

Maybe we should throw together a configurable macro that lets people try out their favorite flavors.
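As a sketch of the desugaring target such a proc macro could emit (function names and error strings are mine), using labeled blocks so that a failing step breaks out of the block with an Err instead of ? returning from the whole function:

```rust
// What a configurable try-block macro could expand to: a labeled block
// where each fallible step that fails breaks out of the block with an
// Err value, leaving `return` free to mean "exit the function".
fn parse_pair(s: &str) -> Result<(i32, i32), String> {
    let result = 'tb: {
        let mut parts = s.split(',');
        let a = match parts.next().and_then(|p| p.trim().parse::<i32>().ok()) {
            Some(v) => v,
            None => break 'tb Err("bad first field".to_string()),
        };
        let b = match parts.next().and_then(|p| p.trim().parse::<i32>().ok()) {
            Some(v) => v,
            None => break 'tb Err("bad second field".to_string()),
        };
        Ok((a, b))
    };
    result
}

fn main() {
    assert_eq!(parse_pair("1, 2"), Ok((1, 2)));
    assert!(parse_pair("x").is_err());
}
```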


One of the major problems with proc-macros like these is that they don't compose well with other macros. For example, you say "passes through/stops/forbids return", but you won't see any returns hidden behind a macro, e.g. the old try! macro or the common bail! macro.


I have to admit I found this comment a bit frustrating, because the idea that infinitely accruing new features has ultimately negative consequences is not exactly a revelation; we're all very aware of it. The argumentum ad C++ has been made many times before. I'd love it if we could, as a community, move past this framing (just as I wish we'd move past "explicit is better than implicit"), into a more nuanced analysis of how languages grow over time and how we can prevent a situation in which the language becomes dauntingly massive. All of us are concerned about this; can we shift the conversation to one which produces new insights?

Here are a few notes of my own toward that end (in no particular order, making no unified argument):

  • Languages change primarily in response to user needs. Backward compatibility means that language change is necessarily monotonic growth.
  • Different users have different, often conflicting, needs based on their different relationships to the language. We must balance these users' needs against one another, and be empathetic to users with needs different from our own.
  • Rust at 1.0 was necessarily the smallest version of Rust that will ever exist from now on.
  • The few languages that have barely grown at all since release did so by narrowly limiting who they are for (I'm thinking of Clojure as the best example of this; Go to some extent, but I see them as only halfway into this philosophy). The use cases Rust is targeting are too broad to simply discount most users like this.
  • The 1.0 release established the core identity of Rust (i.e. ownership, borrowing, traits). But it was understood at the time that it was a skeleton of the language Rust planned to be. Rust has been proved out, it is now in the process of being filled out.
  • The current phase of Rust development (“filling out”) will someday end, and a new phase will begin. The big question is when have we “filled out,” and what will come next?
  • More than simply accruing features, the real challenge languages like C++ face is trying to shift their core identity without breaking changes (i.e. from “C with classes” to “safe C++11” and beyond). This seems to be the cause of much of C++'s negative reputation - you’re not supposed to use half of it.
  • If an aspect of Rust’s core identity comes to be seen as wrong, what do we do? Do we make a major breaking change (Python)? Do we backwards compatibly shift our identity (C++)? Do we just accept it and maybe start working on new languages (who has done this? perhaps C?)?
  • Billions of lines of code and a reputation as a pile of junk is the best outcome any language has ever had, if we’re optimizing for impact on society or the industry. Can Rust really hope to have a better outcome in 2050 than C++ has in 2020? Should we really feel disappointed if it doesn’t?

These aren’t meant as challenges to you in particular, and I’d ask you not to respond to them as such. These are open ended statements and questions to try to dig us deeper into this question and past the reductive “don’t be like C++” narrative that I see the conversation currently ending at.