Pre-RFC: Catching Functions


This has been mentioned before in the thread, but generators also have a function-level wrapping effect that parallels catch blocks’ and these hypothetical try fns’. In their case I think it’s much more clear that wrapping is what we want- both iterator-generators and async-function-generators make more sense with wrapping and are routinely written that way in languages where they exist, so treating try fns the same way has more precedent than just the ?/catch RFC.

Another way to look at this is that Result (or whatever Try impl) is a sort of context you’re operating in. It’s likely that your caller will also be using ?, so in that sense it won’t get an Ok(T) on the outside. Instead, for as long as you remain in the Result context, you just have Ts (plus the option of breaking out of the context).

Given these two ways of looking at it, I don’t believe the symmetry between catch blocks/try fns/etc and regular function-level code is one that needs to be preserved. Rather, it should be broken to clearly separate the success path from the error path, at least while in the “fallible code” context.

Incidentally, I think this also gives another way to talk about why I prefer try fn/-> Result<T, E> catch {/etc to -> T catch E. The function signature is often where you transition in and out of this context, so it’s worth preserving the actual return type to keep one foot out of the context, so to speak. It could also make it possible to compose contexts (like async and try), since I have no doubt people will want to use ? and await in the same function.


Bingo! That made all the things click into place for me (it was either “inside” or “context”, don’t know which).

Which brings me to two things. This whole thing seems very sensitive to wording and explanation ‒ in one mental model it is complete nonsense, while in another it is crystal clear. I think extra care should be taken in choosing the naming and in the documentation.

Another is: if we want it to feel the same as other such contexts (e.g. async, iter, whatever) ‒ and I feel the consensus is that we do ‒ we should make sure external crates are able to provide their own in some way. Futures (which are the result of an async block or function) aren’t part of the stdlib, and even if they get there eventually, other use cases might come up.

So the question is: is it possible to create the catch block without introducing language-level constructs? If it’s a keyword, external crates are out of luck. If it’s, let’s say, a proc macro, it could work ‒ and there’s precedent in futures-await that this might work (not necessarily with the same syntax).

Furthermore, let’s say both Future and Iterator implement Try. Then it implicitly allows the early exit out of the box (which we want). An idea of how this could look:

fn get_result() -> Result<usize, Error> catch! { // Hmm, unfortunately, the try! macro is already taken
  if !precondition() {
    throw Error::new();
  }
  42 // Not wrapped ‒ Ok(42) is fine, but with generators, it'd be pain
}

// Or, inside a function:

  let my_result = catch! {
    if !precondition() {
      throw Error::new(); // Could the proc-macro make sure this just exits the catch block? Maybe introducing a hidden closure, or rewriting it to something…
    }
    // …
  };

And the same with async!, provided by extern crate:

fn get_future() -> impl Future<usize, Error> async! { // That thing is a generator inside, but implements Future and that's what matters
  let fut = async! {
    await!(do_computation())?; // Hmm, here the wrapping makes it less ergonomic :-(
  };
  match await!(some_other_computation().select(fut))? {
    // …
  }
}

I must admit, that does look kind of consistent.
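For what it’s worth, the early-exit part of the catch! sketch above can already be approximated today without any macro, using an immediately-invoked closure (Error and precondition here are placeholders carried over from the sketch, not real APIs):

```rust
// A macro-free approximation of the hypothetical `catch!` block: an
// immediately-invoked closure gives an explicit `return` (playing the
// role of `throw`) a scope to break out of, without leaving the function.
#[derive(Debug)]
struct Error;

impl Error {
    fn new() -> Self {
        Error
    }
}

fn precondition() -> bool {
    true // placeholder
}

fn get_result() -> Result<usize, Error> {
    let my_result: Result<usize, Error> = (|| {
        if !precondition() {
            return Err(Error::new()); // plays the role of `throw`
        }
        Ok(42) // still wrapped, though ‒ the proposed sugar would remove this `Ok`
    })();
    my_result
}
```

The closure trick also shows the cost the sugar would avoid: the final expression still needs explicit wrapping.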


The most straightforward lower-level language construct catch blocks could be built on is probably labeled-break-with-value. However, that’s a different construct than the one for async and iterators, which are built on generators. Presumably people will come up with other scenarios that don’t straightforwardly map to either of those.

The more general mechanism is continuations (as I linked earlier in the thread), but those are hard to expose in a way that’s simultaneously raw and efficient. The closest I’ve seen to that is Kotlin’s suspend funs (on which I have some Opinions about ergonomics), though that’s still primarily the generator concept. So while it would be good to expose labeled-break-with-value and generators directly, I’m not sure we need to (or can!) go beyond that, and I still think a dedicated try or catch block is required for integration with ?.
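To make the labeled-break-with-value lowering concrete, here is a sketch (using today’s labeled-block syntax; parse_pair is an illustrative function, not anything from the proposal) of what a catch/try block containing ? could desugar to:

```rust
// Sketch: a `catch`/`try` block lowered onto a labeled block with
// `break`-with-value. Each `?` becomes a `break` to the end of the
// block instead of a `return` from the whole function.
use std::num::ParseIntError;

fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    let result = 'catch: {
        let x = match a.parse::<i32>() {
            Ok(v) => v,
            Err(e) => break 'catch Err(e), // desugared `?`
        };
        let y = match b.parse::<i32>() {
            Ok(v) => v,
            Err(e) => break 'catch Err(e), // desugared `?`
        };
        Ok((x, y))
    };
    result
}
```

The key property is that the early exit targets the end of the block, not the function boundary ‒ exactly the scoping difference the catch block is meant to introduce.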


Oh, I didn’t mean to be the same under the hood. Let them be whatever they need to… I just meant similar on the syntactical level, so people have the same feeling of them when writing code.


This is a hella long thread which means statistically some of these concerns were probably already mentioned and potentially even addressed. If they were, just consider this plus ones for those.

Rust is a blessing for me. I come from a Scala background, and before that Java. What I ended up not liking about Scala is that its foundation was built on the Java model. It tries really hard to be safe and pure, but when you’re building on top of a model of thrown exceptions you are never really safe. The abstractions leak often despite their good intent.

A few ideas I saw in this proposal and some comments aim to make error handling in Rust more familiar to people coming from other languages. I came to Rust specifically to escape the notion of throwing exceptions, not because its model looked close to it. I have a strong preference for the model of: you don’t throw exceptions, you return values, not unlike Go. That’s a very simple construct for programmers to work within and comprehend. There is nothing “exceptional” about Result types. They are just a value type which encodes the notion of an error. I think I saw mention of “raise” as an alternative. That is no different if you come from a Ruby background.
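A minimal sketch of that “errors are just values” model (parse and describe are arbitrary illustrative functions):

```rust
// Errors as plain values: a Result is an ordinary enum you can store,
// pass around, and match on ‒ no unwinding, no hidden control flow.
use std::num::ParseIntError;

fn parse(s: &str) -> Result<i32, ParseIntError> {
    s.parse::<i32>()
}

fn describe(r: Result<i32, ParseIntError>) -> String {
    match r {
        Ok(n) => format!("got {}", n),
        Err(_) => "error".to_string(),
    }
}
```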

I’m trying really hard to separate my concern over the language terms used from the functionality proposed. The terms used may really be masking the latter.

My other concern is increasing the syntax within the language, when the language is already teetering slightly towards being harder to learn than most programmers have the patience for. Once syntax is added, it’s hard to remove. Have we thought hard enough about doing this with library design, without adding new syntax to learn? Perhaps some library alternative to unwrap that does something slightly different than panicking? I had some earlier concerns with ?. With much use I’m now adjusted. My problem with ? vs try! is that if one already knows the basics of Rust, one can easily understand that there is probably code generation inlining a coding pattern. And that’s exactly what try! did. ?, on the other hand, accomplishes the same thing but requires a programmer to understand new syntax. Increasing the learning surface area seems at odds with reducing Rust’s learning curve. For that reason I’m really counting on the community to help keep syntax down to a minimum, unless we think the added overhead of extending the learning curve will be outweighed by the reward received from the additional syntax.
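For reference, the code generation in question: try! was essentially this match, sketched here as a standalone macro (my_try is an illustrative name, to avoid colliding with the real one):

```rust
// Roughly what `try!` expanded to, and what `?` does today: match on the
// Result, yield the success value, or early-return the (converted) error.
macro_rules! my_try {
    ($expr:expr) => {
        match $expr {
            Ok(val) => val,
            Err(err) => return Err(err.into()),
        }
    };
}

fn double(s: &str) -> Result<i32, std::num::ParseIntError> {
    let n = my_try!(s.parse::<i32>());
    Ok(n * 2)
}
```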

I think this may have been mentioned a few times. Despite the syntax within a function body, I have a strong preference for retaining existing function signatures without change: -> Result<A, B> communicates just as much to me as -> A catch B. I feel like the latter actually loses information, as it’s not immediately apparent that the return type is actually a Result type without spinning a few extra CPU brain cycles.

Again apologies if any of this is redundant and was already discussed. It’s exciting to see the community engaging like this!


While I’m unfortunately not in support of this PR (so far) I’d like to point out that I find it very well thought-through and written. Great work, @withoutboats!

Same here. Thanks indeed, @chriskrycho and @H2CO3.

To give some real-life example for why hiding Option/Result behind syntactic sugar might be a bad idea for the sake of flattening the learning curve.

I have stopped counting the number of intermediate level Swift developers who managed to get used to optional unwrapping à la …

if let foo = bar { // foo: T; bar: Optional<T>
    // …
}

… but fail to grasp the whole concept behind enums and the case let syntax …

if case let .some(foo) = bar { // foo: T; bar: Optional<T>
    // …
}

For them these are completely disjoint concepts. Telling them that those two things are the same and showing them the source code for enum Optional<T> { … } never fails to blow their minds. (I consider this a bad thing. Their response should be a bored “Well, of course.” instead, at this point in their education.)
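(For the Rust equivalent of that mind-blower: Option can be re-created in a few lines, with no compiler magic in the definition itself. MyOption here is of course just an illustration, not a real std type.)

```rust
// Option/Optional is just a plain enum; redefining it requires no magic:
enum MyOption<T> {
    Some(T),
    None,
}

fn describe(x: MyOption<i32>) -> &'static str {
    match x {
        MyOption::Some(_) => "some",
        MyOption::None => "none",
    }
}
```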

One might argue that thanks to syntactic sugar they managed to get rather far with Swift without having to understand enums at all. I would argue though that by lifting Optional and the like into a bazillion of syntactic sugars the language actually makes it extremely hard to make the jump from a dev who merely manages to make their code work, to one who actually understands it. This can also be seen in the under-utilization of if case let (generalized pattern matching syntax) in Swift.

Having short-hand syntactic sugar T? for Optional<T> further enforces this misunderstanding of what an Optional actually is. Coming from C++ or Java, most people still think of T? being a nullable pointer-thingy. I would consider -> T catch E similar in nature to what T? is to Optional<T> in Swift, in that they give the wrong impression that it’s something special and completely unrelated to enum.

I dislike T? and if let in Swift for the very same reason that I’d rather not see any exception-like syntax in Rust: They give a completely wrong picture of how the language actually works, that’s damn hard to get rid of afterwards. I’ve been mentoring lots of Swift beginners and this is a constant struggle. There is a strong and negative sentiment in larger parts of the Swift community (mostly new folks) against having to explicitly unwrap Optional<T>, rather than just accessing the wrapped value as is common in Objective-C, C++, Java and the like. It comes from the wrong impression of T? being little more than an annoying nullable object reference [sic!]. None of this kind of sentiment is found in Rust afaict. Why? Because there is no wrong picture of something being drawn in the minds of the users that is to be annoyed about.

So while one manages to lead beginners to a rather advanced point of their learning curve without having to dive into enum and their peculiarities, one makes them internalize a wrong picture of how the language works, which then later on requires them to unlearn and doubt basically all the things (maybe it’s syntactic sugar all the way down and nothing is what it seems?) they just managed to learn so far.

As it happens I have written a lengthy article on why I consider Swift’s generous use of syntactic sugar a hindrance, rather than a help for beginners: “Syntactic Diabetes” and a rather heavy burden on the language. I’d rather not see Rust go the same path for the whole sake of instant gratification.


I’ll just second that my experience with Swift here is a major part of why I feel the way I do about this proposal. That kind of syntax-obscuring-the-way-things-actually-work behavior (in order to simplify the mental model) is pervasive in Swift, and there are places where I think it’s fine—but this is one of my least favorite places of it, for all of and exactly the reasons @regexident outlines.


I believe this is a bad idea. (Sorry for the language in advance; I’m not a native speaker.)

First, there will be two flavours of code to be seen: the old kind, which shows what actually happens, and the new one, which makes people coming from languages with exceptions believe there are some Rust exceptions, even though this actually has nothing to do with exceptions in other languages.

Second, languages like C++ are going in the direction of Rust (std::optional, std::expected), for good reason. Rust steers people towards more local error handling ‒ which is much better than catching everything in main, in my opinion.

Third, I’m afraid that most people who use Rust don’t have the time to look at such ideas, and might be totally surprised if such changes happen ‒ so please don’t rush such things. Make it visible for everybody and try to get as many people to comment on this as possible. I’m a bit afraid of RFC opinion bubbles.


This is just a discussion of this general area of language design so far… An official RFC has yet to be filed. Generally, the Rust team and community does a really good job of getting community feedback about new features.

There is also a rather long path ahead for these ideas before/if they make it into the language. Someone has to open an RFC. Then there will be more discussion. If the feature is accepted, it is implemented on nightly Rust first, and people can play around with it and modify it for a while. Eventually, it would be stabilized. At any point during this process, the proposal could be changed or rejected.

The result is a great language that makes people happy and productive :slight_smile:


This is one of the most valuable aspects of Rust as a language for me.
I feel like it’s often overlooked, or taken for granted and assumed unbreakable.
While in reality it needs to be looked after, protected, hardened.

It’s one of the things that differentiate Rust from so many other languages.
It’s what gave me the confidence to go all-in on Rust (coming from C/C++/Swift) when I found out about it as a language and trust that it will not turn into one of those many “everything and the kitchen sink”-languages.

Yes, Rust is a complex language. But no, Rust is not a complicated language.
Rust’s syntax as well as semantics are based upon a small and easily memorizable set of orthogonal components.

Let me give an example.

(Please excuse the length and what might seem like going off-topic for a bit. It’s not. promised. :confused: )
(For the sake of this example allow me to assume that this Pre-RFC (“unnamed struct types”) had landed by now.)

In Rust there is a hierarchy of types:

Unnamed types (tuples)

An unnamed type can have zero, n indexed or n named members:

  • ()
  • (T, U, …)
  • { t: T, u: U }

Named types (structs)

A named type is a combination of one of those three kinds of unnamed types, plus a name:

  • struct Unit (Rust omits the () here)
  • struct Indexed(T, U, …)
  • struct Named { t: T, u: U }

Once you know tuples, you know structs.
They can be thought of as “just named tuples”.

Sum types (enums)

A sum type is a combination of one or more named types:

enum Foo {
  Indexed(T, U, …),
  Named { t: T, u: U },
}

Once you know structs, you know enum variants.
They can be thought of as “just unions over structs”,
that need to be pattern matched to access.

While this example may seem completely unrelated to the topic at hand, lots of things fall out of it:

Pattern Matching

Matching on tuples:

match () {
    () => …,
match ("indexed", true) {
    (name, ..) => …,
match ({ name: "named", flag: true }) {
    { name, .. } => …,

Once you know how to pattern match on tuples,
you know how to pattern match on structs:

match Unit {
    Unit => …,
}
match Indexed("indexed", true) {
    Indexed(name, ..) => …,
}
match (Named { name: "named", flag: true }) {
    Named { name, .. } => …,
}

Once you know how to pattern match on structs,
you know how to pattern match on enum variants:

match Foo::… {
    Unit => …,
    Indexed(name, ..) => …,
    Named { name, .. } => …,
}
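In today’s stable Rust (without the unnamed-struct-types pre-RFC assumed above), the same progression can be written out and run directly ‒ the pattern shapes simply reappear at each level, prefixed with a name:

```rust
// The tuple → struct → enum progression in stable syntax.
struct Indexed(&'static str, bool);

enum Foo {
    Unit,
    Indexed(&'static str, bool),
    Named { name: &'static str, flag: bool },
}

fn tuple_name(t: (&'static str, bool)) -> &'static str {
    let (name, ..) = t; // destructuring a tuple…
    name
}

fn struct_name(s: Indexed) -> &'static str {
    let Indexed(name, ..) = s; // …a struct (same shape, plus a name)…
    name
}

fn enum_name(foo: Foo) -> &'static str {
    match foo {
        // …and enum variants, all with the same pattern shapes
        Foo::Unit => "unit",
        Foo::Indexed(name, ..) => name,
        Foo::Named { name, .. } => name,
    }
}
```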

By making use of carefully chosen combinations of orthogonal language features
Rust allows one to apply very few rules (with almost no exceptions!!!)
to a broad range of uses, such as:

  • Every type can be destructured/pattern-matched.
    Tuples, structs, enums. No secret sauce necessary.
  • Every type can implement functions.
    Tuples, structs, enums. No secret sauce necessary.
  • Every type can be used as reference or value type.
    Tuples, structs, enums. No secret sauce necessary.
  • Every type can be used as mutable or immutable value.
    Tuples, structs, enums. No secret sauce necessary.
  • Every type can be <enter characteristic here>.
    Tuples, structs, enums. No secret sauce necessary.

Mutability and reference-typed-ness are realized through composition, rather than by introducing special cases. Same goes for memory management, atomicity, … you name it.
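A small illustration of that composition (Point is an arbitrary example type):

```rust
// Mutability and references compose uniformly with every type flavor,
// rather than being special-cased per kind of type.
struct Point { x: i32, y: i32 }

fn bump_struct(p: &mut Point) {
    p.x += 1; // `&mut` composed with a struct
}

fn bump_tuple(t: &mut (i32, i32)) {
    t.0 += 1; // `&mut` composed with a tuple, identically
}
```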

… see a pattern here? Rust is composition all the way down. :heart_eyes:

It’s these clearly visible patterns and the lack of exceptions to most rules that make Rust,
while with no doubt one of the more complex languages, such a joy to learn, teach and use.

Learning patterns scales linearly: O(n + m). (:point_left:t2: Rust, a complex language)
Learning cases/exceptions scales quadratically: O(n × m). (:point_left:t2: Swift, a complicated language)

Instead of learning how to destructure/pattern-match, call functions on, borrow, … tuples,
and then having to learn a completely different set of rules for structs,
and yet more and utterly different rules for enum variants, one just has to learn it once for the lowest abstraction and then build the higher ones from it.

I don’t know any other similarly complex language with such a rich set of type-flavors that still manages to not require one to memorize a bunch of exceptions and sugars for every single one of its rules/parts, as is the case for most other non-trivial languages.

So what does all this have to do with catching functions, again?


Syntactic sugar that simplifies the use of some things at the cost of obfuscating their true nature has the ugly downside of making it impossible for the observer to see the language patterns that one would otherwise find easily. Without patterns, one is forced to memorize every single configuration separately as an individual case, again blurring the semantics that unify them. This is what happened to Swift.

Catching functions, I fear, risk becoming one of those sugars that do more harm than good.


I’ve seen a number of comments along this line in this thread, and I’d like to request that those making them take a closer look at what they dislike about exceptions and elaborate on how the proposal is changing what rust currently does in those areas.

For me, I like Result+? over, say, C# exceptions because:

  1. The possible errors are explicit in the type of the function
  2. All the places that can fail are clearly marked with ?
  3. The return value is a real type, which can be saved or restored, put in containers, passed to methods, etc.

There’s nothing in this proposal that changes any of those things, so I don’t understand the doom and gloom that seems so commonly expressed.
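Point 3 in particular has no analogue with thrown exceptions; a trivial sketch (partition is an illustrative helper):

```rust
// Results are first-class values: collect them in containers and process
// them like any other data ‒ something thrown exceptions can't do.
fn partition(results: Vec<Result<i32, String>>) -> (Vec<i32>, Vec<String>) {
    let mut oks = Vec::new();
    let mut errs = Vec::new();
    for r in results {
        match r {
            Ok(v) => oks.push(v),
            Err(e) => errs.push(e),
        }
    }
    (oks, errs)
}
```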

It probably could, as try a? + b? is somewhat reasonable, but I think forcing the block is easier and more forward-compatible than trying to pick exactly what the precedence would need to be.

Well, here’s a trivial and obviously-contrived function that’s “infallible”:

fn foo_1(v: &mut Vec<i32>, x: usize, y: usize) -> i32 {
   v[x] + v[y]
}

I want the fallible-instead-of-panicking version to be as simple:

fn foo_2(v: &mut Vec<i32>, x: usize, y: usize) -> Option<i32>? {
   *v.get(x)? + *v.get(y)?
}

The existing ? lets me explicitly mark things as fallible when I want the by-far-the-most-common interpretation of “continue along only when successful”. With “function-level-?” (or whatever this becomes), it lets me say the same “the ‘normal’ path is the successful one” for the entire function.

main is successful when it reaches the end. #[test]s are successful when they reach the end. Infallible functions are successful when they reach the end. -> ! functions are, in a sense, never successful because they never reach the end. Most fallible functions using ? are also successful on reaching the end today; this would just help solidify that.

(Part of me even wants to double-down on this and only allow ? and throw in “catching functions”, like how C# only has await in functions that are async. That way they’re one “continuing along means success” feature. But that’s a more hardline version than I expect would get traction.)

Actually, I think my knee-jerk reaction was premature. I think it’s fine for |x| catch { ... } to be function-level catch because

  • Most of the time they do the same thing,
  • A function body containing only a catch is pointless, and
  • Someone could do |x| { catch { ... } } if they really wanted to.


That could go wrong once macros are involved. You’d always have to pass the catch block inside { ... } to macros to ensure you get expression semantics.


My inclination is that having these sorts of weird special cases will make life difficult in the future.

Error ergonomics

I was wondering about this. Seems like it’d be too much deprecation though, if nothing else.


One thing we could do, which is potentially really contentious (…so hear me out haha), is leave ? solely for non-context-based situations, put no annotation on calls to catching functions inside catch blocks, and instead put the annotation on the case where you want a Result value in a catching function.

So you could write any of these (bikeshedding aside, I’m just demonstrating the non-usage of ?):

// traditional style
fn try_f() -> Result<T, E> {
    let x: T = try_g()?;
    if fail() { Err(e)?; }
    // …
}

// catch context inside the function
fn try_f() -> Result<T, E> {
    try {
        let x: T = try_g();
        if fail() { throw e; }
        // …
    }
}

// catch context as the function
try fn f() -> T {
    let x: T = try_g();
    if fail() { throw e; }
    // …
}

// when you actually need a Result
try fn f() -> T {
    let x: Result<T, E> = <some annotation> try_g();
    // …
}

Now, the reasons I think this might actually work out: It solidifies the distinction between the two contexts (like C# async functions), preserves the distinction between early-return and non-early-return, and is backwards compatible.

Finally, two analogies in support of this idea: unsafe blocks and Kotlin coroutines.

I’ve compared this to unsafe before- you don’t annotate every single unsafe operation, you just use them in an unsafe block or function. The context itself is enough. Notably, you can also write unannotated safe operations in an unsafe block and that doesn’t seem to be an issue either.
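A small sketch of that analogy (first_elem is illustrative):

```rust
// Inside an `unsafe` block, individual unsafe operations carry no extra
// annotation, and safe operations mix in freely ‒ the context is enough.
fn first_elem(v: &[i32]) -> Option<i32> {
    if v.is_empty() {
        return None; // safe check, outside the block
    }
    unsafe {
        let p = v.as_ptr(); // safe operation, unannotated
        Some(*p)            // unsafe raw-pointer deref, equally unannotated
    }
}
```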

Kotlin’s coroutines work this way and they look really nice. Like C# async/await, they are a compile-time state machine transformation. But instead of littering awaits (or await!()s) everywhere, they just work via contexts:

// some coroutine, aka a Generator
suspend fun do_some_io() { ... }

fun non_coroutine() {
    // do_some_io() // ERROR: unlike C#, can't accidentally fire off an async op in a sync context
    runBlocking { // runBlocking() takes a suspend fun closure and runs it to completion
        do_some_io() // this is implicitly awaited
        val x = async { do_some_io() } // async() takes a suspend fun closure, launches it, and returns a future
        x.await() // await() is just a suspend fun (which is thus implicitly awaited here)
    }
}

Here, runBlocking is similar to ? in that it lets you call a “special” function in a non-special context; implicitly awaiting is like implicitly try!()ing; and async is like the annotation to give you a Result from inside a catching function.

I’m a huge fan of this for async/await because await!()-heavy code is really noisy, so I thought I’d sketch out what it looks like to apply the same idea to Result. I kind of like it!


Interesting idea. If I am understanding correctly, you mean that we would not be able to see where errors are potentially propagated from inside a try block? (but rather it would be sort of implicit?) If so, that feels like a big step backward – I frequently rely on ? to understand control flow when reading code. I am also not entirely sure how we would setup the rules at all to work this way (does it apply to any function that returns a result?)

Still, interesting to think out of the box :slight_smile:


That’s correct- I justify that to myself by comparison to unsafe (not sure which operations are unsafe) and by comparison to Kotlin (not sure which operations can suspend), since those both seem to have worked out, but error handling certainly could require different tradeoffs.

No- only to try fns, which may or may not even look like they’re returning a Result depending on how that bikeshed gets painted. (Much like you can only call Kotlin suspend funs from within other suspend funs, and must use something like async or runBlocking otherwise.)


This is why I feel like the name unsafe is a little bit unfortunate. In my mental model it’s not even about an unsafe code block. It’s not that Rust’s safety is turned off here; I can just do more than I am usually allowed to. Therefore I don’t think this analogy works really well in this example. Please don’t take away my lovely ?s; it is such a step forward.

And to get a bit more on topic: I strongly resonate with @regexident; introducing too much syntactic sugar and hiding essential basics of the language can backfire. I have made roughly the same experience with Swift at my workplace. It just so happened that Swift spread among my coworkers who work on iOS at roughly the same time I got interested in Rust. Despite the fact that they use Swift a significant amount of time at work and I have just a few hours for Rust in my spare time… I felt like I had managed to understand Swift better with the knowledge of Rust than my coworkers did in the early days of our Swift endeavor. The fact that they responded “WOW, cool ‒ nice to know!” when I showed them that Swift’s ?s are nothing more than enums, by just creating custom Optionals, was a little bit concerning. And Swift has a ton of sugar and at least two ways to express the same thing (and maybe doesn’t let the user know it’s the same thing), which makes me really appreciate what Rust gives me today. I am really in favor of -> Result<String, String> and putting catch just anywhere (in front of the function like #async, or behind the return type, or any version others have suggested). I have to say -> i32 catch Error does look nice (I often work with Java), but for the things I love about Rust I hesitate to want this in the language :confused:


The problems with Swift seem to have more to do with the fact that once you have an Optional you can’t just destructure it the same way you can with a normal ADT. This is not true with this proposal - a catching function returns a Result, which is still just a library type like any other.


That’s not strictly true, actually. You can drop this in a Swift playground and see that it works just fine ‒ and identically when spelling out the optional type explicitly:

let foo: String? = nil
switch foo {
case let .some(str):  // or case .some(let str):
    print("HEY, \(str)")
case .none:
    break
}

And this is precisely the point I’ve been concerned by. It’s extremely easy to develop the wrong mental model of what’s happening when certain kinds of sugar are introduced. That doesn’t in and of itself make the proposal here bad, of course! But I think it shows how easy it is for these kinds of syntactical sugar to have some unexpected downsides in terms of what they communicate even among really sharp and well-informed people.