Pre-RFC: flexible `try fn`


I find the term “Ok-wrapping” unfortunate, since it frames the feature from a perspective where not doing it is the default, making it look like weird implicit behaviour. The opposite framing, that not doing it amounts to an “implicit tail ?”, is equally if not more valid, since it frames try as the direct inverse of ?: taking an expression that produces a value and potentially branches on an error, and turning it into a value representing both the success and failure cases. Success and failure values are both being wrapped; referring to “Ok-wrapping” as a stand-alone feature conceals the fact that not doing it is what’s asymmetrical.
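To make the symmetry concrete, here is a small sketch in today’s Rust (the function name is made up for illustration): `?` unwraps a success value out of a Result or propagates the failure, while `Ok(...)` wraps a plain value back into one.

```rust
use std::num::ParseIntError;

// `?` goes from "value representing both cases" to "plain value or early
// return"; `Ok(...)` is its inverse on the success path.
fn double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.parse()?; // unwrap-or-propagate: Result -> i32
    Ok(n * 2)                    // wrap: i32 -> Result
}

fn main() {
    assert_eq!(double("21"), Ok(42));
    assert!(double("x").is_err());
}
```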



I find it interesting that Python is often mentioned like that. Python has a lot of baggage and a lot of weird things (like being a “batteries included” language that has no ISO 8601 dates in the stdlib…).

Ruby also has a BDFL, which leads to a lot of oddities. You know that thing about “the principle of least surprise”? That surprise is explicitly Matz’s surprise (“The principle of least surprise means the principle of least *my* surprise.”)

I also don’t believe that a BDFL improves things. The way C++ went has nothing to do with having a committee; it has to do with being a language from the 80s. Many ideas that sounded good back then turned out to be bad. We can avoid those mistakes.

We’ll make our own instead. We’ve already made some.

Java is also designed by committee and still turned out to be an extremely coherent language.

BDFL is an old-fashioned process from the 90s. Rust wouldn’t be better if some of its leading figures were crowned king and given the final say.

I’m regularly one of the “wait a little” crowd. I have argued in a couple of RFCs (especially around the module path changes) not to make them (at least not the first couple of RFCs). Remember that the module path changes were struck down multiple times before the RFC was accepted. These opinions are heard.

People are too attached to the RFC process directly. Yes, the RFC process gives no venue for overarching discussion, because it’s focused on details. Also, if you want to work at the RFC level, yes, you totally have to make the time. But there are other venues, like this forum.

Also, we do make offers to people who don’t have time for RFCs and encourage them to write about what they want to see in the language: the #rust2018 blogging campaign did that. Turns out, many people want to see changes. Not many people asked for a cooldown.

If you have a different plan for the wider evolution of Rust, anyone can write a long-form text here or on their blog. People listen and take that stuff into account. Just be aware that not following your sketched path (in full) is also always an option :).

We are very aware of many problems the RFC process has.

I don’t find @withoutboats’ comment derailing; it lands precisely at the right spot. Its point is not “don’t say we should do this”, its point is “this is a very cheap argument, frequently made, that we can’t interact with”. It asks for nuance. Pointing to C++ and saying “accumulating features is bad” doesn’t help us; you can’t interact with that, short of freezing the language. Pointing out how this feature might not improve things enough to cover the implementation cost does help us. That’s something that can be interacted with.


Fortifying the process against feature bloat

I went back to this and kept thinking. I have to say: thank you very much for the link. Why? Because this is the first instance of a problem statement in this thread I’ve managed to find (a bit vague, but good nevertheless). I really believe a problem statement helps a lot, both for having a meaningful discussion and for finding good solutions.

And after reading it, I found out I have to change my mind regarding the „There really isn’t anything to fix around error handling“ position I held before.

The described situation (paraphrasing a bit): „I have a function prototype, full of .unwrap(), and now I want to do it properly with error handling. If it were Java, one would just throw an exception.“ I searched my mind and tried to imagine a scenario where it could go wrong:

  • I change the return value to Result<Whatever, Error> (cool, we have the failure crate, it takes care of wrapping any kind of error, just like exceptions do), replace all .unwrap() with ? and wrap the last thing in Ok( ). Compile… meh, there’s a return, so I have to wrap that one too. That wasn’t that hard. Somewhat mechanical (:thinking: could RLS do that? Or cargo somethingsomething?), but easy, let’s move on. Not really worth crying about much, for me.
  • Compile… Crap, someone actually calls that function. If I’m lucky, it’s in a function that already returns Result, so it’s just a matter of adding ?. If not, another level of ↑. Yep, this is getting a bit tedious; maybe I should do it after lunch, when I don’t have the mental capacity for anything better anyway… but I should at least look at these functions to check that the new error handling doesn’t break anything.
  • Oh nooo, I call it from somewhere in the middle of a very long method chain. I have to rewrite the whole thing, because now Results are everywhere. How do I early-exit the iterator if there’s an error? Would it be better written as explicit for? git reset --hard, you know, the errors don’t happen that often in practice anyway…
  • [Alternative]: Oh noooo, I need to call this function from a callback I pass to something from another crate. There’s no way I can get the Result through that, how can I extract my errors out? I guess I’ll just have to place the unwrap() at the top-level of the callback.
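For illustration, the mechanical part of step 1 in the list above might look like this (a hypothetical prototype function, not code from the thread):

```rust
use std::num::ParseIntError;

// Before: the prototype just panics on bad input.
fn parse_port_prototype(s: &str) -> u16 {
    s.trim().parse().unwrap()
}

// After the mechanical conversion: every unwrap() becomes ?, and the
// tail value gets wrapped in Ok(...).
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    Ok(s.trim().parse()?)
}

fn main() {
    assert_eq!(parse_port_prototype("8080"), 8080);
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("oops").is_err());
}
```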

My take from this:

  • Yes, there are pain points worth fixing in Rust’s error handling.
  • Have we already collected, or should we collect, some scenarios and stories of when error handling is really annoying? I mean, Ok(()) pops up a lot, but it doesn’t seem to be the biggest pain point, and several people have hinted at that. It’s not nice, but IMO probably not worth some big global changes. A constant (std::prelude::DONE)?
  • I agree this is easier in Java (or similar), where I just throw an exception, catch it 7 levels up the call stack, and I’m done. However, I think this is the ergonomics of exception semantics, not of their syntax. Replacing return Err(whatever) with throw whatever doesn’t really bring me much, at least not in the story I could come up with above. I don’t know whether exception syntax is more ergonomic in itself or just a byproduct of exception semantics (because exceptions simply need special syntax). It might make step 1 slightly faster (maybe, maybe not, but for the sake of argument, let’s say it does), but it doesn’t solve the real problems.

(And no, I don’t have a solution for the problem highlighted here)



Unless you are calling the function from a method handle on a streams method (or some other context where checked exceptions weren’t accounted for).



@vorner It’s worth mentioning that this problem of “viral error types” is why I’ve always been a big fan of the “enum impl Trait” idea in its various incarnations. While that doesn’t help much in the 0 errors -> 1 error transition, it helps immensely with any M errors -> M+N errors transition.
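To illustrate the churn being described, here is a hand-rolled sketch of the kind of thing an “enum impl Trait” might generate automatically (names are made up; this is not output from any existing proposal):

```rust
use std::{io, num};

// Every new failure mode (the +N) means a new variant here, a new From
// impl, and new match arms everywhere this type is inspected.
#[derive(Debug)]
enum LoadError {
    Io(io::Error),
    Parse(num::ParseIntError),
}

impl From<io::Error> for LoadError {
    fn from(e: io::Error) -> Self { LoadError::Io(e) }
}

impl From<num::ParseIntError> for LoadError {
    fn from(e: num::ParseIntError) -> Self { LoadError::Parse(e) }
}

fn load(input: &str) -> Result<u32, LoadError> {
    Ok(input.parse::<u32>()?) // `?` converts via the From impls
}

fn main() {
    assert!(matches!(load("7"), Ok(7)));
    assert!(matches!(load("x"), Err(LoadError::Parse(_))));
}
```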

I’d also like a targeted solution for getting rid of Ok(()), without any other wrapping or scoping sugars, just as an extension to the already special treatment of (). I should probably write an actual proposal or blog post about this at some point.

I currently am not aware of any proposal that, in my opinion, makes a big difference in the no errors -> any errors transition. I think some have claimed that try{}/throw/etc help with this case, but that’s something I’m skeptical of, and would need to see a little bit of evidence for. Unfortunately every time I’ve asked a fan of try{}/throw/etc to dive into that claim, I didn’t even get a toy example where it made a significant difference. So if anyone knows of an argument that’s any less subjective than “I think it would help”, please bring it up, because I’m becoming convinced that there is no such argument.



This specific situation is not really a pain point with Rust’s error handling, though, is it? It might seem like it is, because this sort of thing is usually not painful in other languages. However, I think it’s more like ownership and borrowing. In a language with a less powerful type system, with no ownership or borrowing and its RWLock-like rules (many readers or one writer), it might well be “easier” to write certain kinds of code and get it to compile, because the compiler doesn’t constantly complain about moved-from lvalues or simultaneous mutation and aliasing. However, this is an illusion: chances are that in those other languages, the code which compiled, but whose equivalent wouldn’t compile in Rust, was simply incorrect and had the wrong semantics!

The situation you described is very similar to the above. To put it this way: the fact that you can’t “just throw an exception” is not a bug, it’s a feature. It is the result of the type system requiring you to handle errors. It might have been convenient to just throw an exception, but at that point, the code would probably have become incorrect, since the users of the now-fallible function don’t get notified that they need to handle errors, which basically defeats the point of error handling.

It might be painful, but the pain comes from (and is inherent in) the need of handling errors and doing it correctly, and is not present because Rust is doing error handling wrong. Rust, unlike many other languages with conventional unchecked exceptions, merely exposes this pain, instead of letting it go unnoticed – which is why we are using this language in the first place… :slight_smile:

By the way, a useful technique for improving user experience in the specific situation you described is this: make that function return Result upfront! Even if it returns Result, you don’t have to handle errors correctly right away. You might just unwrap everything in the function body, and return Ok(…) at the end. But then you will write the rest of the code (that uses this function) with the correct assumption about its fallibility in mind, and thus the changes will be limited to the body of the function once you start fleshing out real error handling.
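A minimal sketch of that technique (the function and its error type are invented for the example): the signature commits to Result from day one, while the body still unwraps everything, so fleshing out real error handling later only touches this body.

```rust
use std::num::ParseIntError;

// Fallible-from-day-one: callers are already written against the final
// Result-returning shape, even though the body is still prototype quality.
fn average(csv: &str) -> Result<f64, ParseIntError> {
    // TODO: replace unwrap() with `?` once real error handling lands
    let nums: Vec<i64> = csv.split(',').map(|s| s.parse().unwrap()).collect();
    Ok(nums.iter().sum::<i64>() as f64 / nums.len() as f64)
}

fn main() {
    assert_eq!(average("1,2,3").unwrap(), 2.0);
}
```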



I can argue this way and rationalize why the pain is the good kind of pain… but that doesn’t mean the pain is not there. If we decide there’s no solution (and close the discussion once and for all, so there’s a decision to point to), then OK. But still, one has to wonder if there could be less of the pain ‒ if not „have your cake and eat it too“, then at least nibble a bit. You mention the borrow checker, and I’ll take you up on that. While there’s certain pain about it and we all believe it helps us write better programs, that didn’t stop people from coming up with NLL, which eliminates some of the pain where it was frustrating because the compiler wasn’t pointing out actual errors.

So, as a really crazy idea, would it make sense to think about whether higher-order functions could be written in a fallibility-agnostic way? A map that, when called with an infallible function, returns an iterator, and in the fallible case returns a Result<iterator>? Or even better, come up with a mechanism that could be used to be both fallibility-agnostic and async-agnostic, and would allow being agnostic to something completely different (e.g. supporting an IResult with 3 variants)? I’m not saying this is definitely a good solution or even a good direction, but I am trying to say: if there’s a specific problem on the table, we can search for a solution. But „error handling is hard in Rust“ is hard to act on, just as, as was correctly pointed out, „being like C++ is bad“ is hard to act on.

Sure, and I do that (that’s why I needed to think hard to come up with the scenario). But the advice kind of requires time travel at the point where you already didn’t do it.

So the solution might actually be as simple as adding a tip to the error handling chapter of The Book. Or at least, it might help somewhat.



Koka has composable, extensible, and generic effects. What that means is that you can write a function which is error-handling-agnostic and async-agnostic.

For example, a map function which has the correct behavior regardless of whether the callback has error handling or not, regardless of whether it is async or not, etc.

I doubt Rust will go in that direction, but it’s super cool that it’s even possible to do it in the first place.



In the effects-design space there’s also McBride et al.’s very interesting paper on Frank.



In that case, I will also have to admit that I find your comment a bit frustrating too, because it sounds as if it automatically dismissed the opposite opinion as immature or short-sighted.

While this may sound appealing, there are still a few questions for which the right thing to do is to make a firm decision. I think a core value judgment in language design is perhaps one such exception. Drawing a line in the sand (after proper consideration and design, of course) is better than endless hesitation and debate about alternatives, continuous wandering in the design space while accumulating technical debt, or settling on compromises just for the sake of compromise, which are then bad for both parties. In this regard, a solid decision is both acceptable, because nobody is forced to use Rust (those who don’t like its fundamental ideas can try it out, then leave), and necessary, because lacking a very explicit direction and goals, the language won’t go anywhere.

I didn’t question that, although I feel that there are many users, RFC authors, etc. who, even if aware of the problem, don’t particularly care about it, because it is at tension with their personal favorite feature being added to the language.

That’s assuming that accumulating features is necessary or inevitable. However, it’s neither. You even cited some examples yourself (e.g. Go) which prove the opposite.

But even monotonic growth can be handled well – the key distinction is between “monotonic” and “unbounded”. Unbounded growth in a software system leads to an inevitable decline of quality. So a possible solution to this quality problem may be simply not adding many significant features, instead asymptotically approaching a supremum of complexity, possibly even restricting maintenance to fixing bugs (there are still plenty of them in the compiler, and an escalating language complexity doesn’t help with it).

And what about users who started using and loving the language for what it was earlier? Especially in the light of there existing several other languages out there, with features which are not available in Rust. If one likes programming with such features more than Rust’s approach, they could just use those other languages instead of Rust. In other words, there’s not much point in changing a language with the goal of making it more similar to another language. It’s also by itself a strange idea to suddenly shift the target audience from those who liked Rust for its original profile, toward those who would like it for its future similarities to other languages, or for a completely different paradigm that they hope would be adapted by Rust, and who are consequently pushing it in that direction.

Incidentally, I think a new paradigm could even warrant the design and implementation of a new language — but why take away the existing language from its current users who like it just the way it is (give or take a few rough edges)? We can’t possibly cater to everyone anyway, and I don’t think that would be a healthy goal.

On a related note, someone mentioned it a couple of days ago, but I also find it very strange and backwards that the burden of proof is on those who want to keep the language on the current track, and not on those who want to change it. We constantly keep having to defend the position that Rust treats specific constructs the way it does for good reasons, which have been discussed and evaluated thoroughly, which work well in practice, and which were chosen to be different from other languages because they are a better solution than what is found in other languages. And if we get tired of this endless fight, we find ourselves in a situation whereby the language suddenly changes under our feet without good reason.

Apart from how frustrating that situation is, it also makes it impossible to build a solid foundation upon the language. If idioms and best practices change sharply and suddenly, then code written today will become technical debt tomorrow. New programmers won’t understand why something was done differently in the “old days”, because the book, the community, and lints will teach it exclusively the new way. Even with a technically upheld “stability guarantee”, this would pretty much undermine all practical attempts at stability and long-term maintainability. Maintainability and stability of code doesn’t stop at “it will keep compiling”. It includes “it will stay easy to understand and modify by future generations” as well.

Radical changes to the language such as this one can’t really be considered “filling out”. I’ve said this before, but in my opinion, Rust would benefit a lot more from actual filling out, i.e. the implementation of accepted features, the refinement of existing ones, and above all, bug fixes. The act of changing the core idioms would effectively hijack the language.

Yes, that’s exactly what I was trying to describe above. Proposals which try to change the core identity of the language also result in backwards-compatibility-induced bloat, although I would argue that’s the smaller problem; shifting the identity in itself is the more serious one, because it basically makes the language’s existence pointless.

That is an excellent question, although I don’t see how it’s relevant to this discussion. Anyway, given how well current Rust is designed, it is highly unlikely that its core identity turns out to be just plain wrong within a time frame when the language, or at least its stability, still makes an impact. C is an excellent example of this phenomenon: it was the best way to do systems programming back in the day, but over time, its uncontrolled, unchecked, weakly-typed approach to accessing hardware and resources turned out to be unnecessary and too dangerous. 40-odd years after the birth of the language, we can now design better systems programming languages in the hope of replacing C, without the need of either growing C ad infinitum backwards-compatibly, or changing it and breaking people’s old code. The C standards committee and community seem to have realized and embraced this; unfortunately, the C++ community hasn’t, and now it has an identity problem.

That’s quite a hyperbole. If you look outside the design-by-committee world, you will find plenty of counterexamples. Most functional languages’ users love and admire the mathematical beauty and purity of their favorite language, usually not without a reason. Some major scripting languages, such as Python and Ruby, also have a user base agreeing that the language is good for the purposes it was designed for. Sure, there are many other languages that have a reputation of being a “pile of junk” (and rightfully so), but it’s not a necessity. Why should we set our own standards so pessimistically, then? “It will turn out to be bad anyway, so we might as well not care at all (or let anyone actively mess it up)” is not constructive, because if we think about Rust like that, then what was the point in the first place?

If the Rust community manages to keep the language sane, reasonable, and faithful to its original fundamental values, then I think it can realistically have such high hopes. Otherwise, probably not.

With this claim, you are degrading anyone who is concerned about the growth in complexity of the language, to the level of someone clueless who just automatically contradicts everything with a knee-jerk reflex. Please consider that most of the so-called “don’t be like C++” arguments have solid and extensive technical reasoning behind them, and highlight important technical as well as social/cultural issues.

Again, this ignores the fact that people do regularly give detailed, professional arguments as to why they consider a specific feature or change to be a bad idea. So the claim that the entire opposition can be summed up as “pointing to C++” is an oversimplification at best, a strawman at worst, and in either case is simply untrue.



I also don’t believe that BDFL improves things.

I don’t think a BDFL is better. It makes clear where decisions are made, and it provides a clear venue for saying no for aesthetic / language-coherence reasons, but doing so by elevating a single person’s preferences above all others obviously has downsides. My point was that languages without a BDFL might have to provide other means for this, such as a trusted committee that takes on this role.

I don’t find @withoutboats’ comment derailing,

I didn’t say that at all? I thought it was a very insightful post! In fact I was just trying to build on that to try and provide a more focused diagnosis of the problem as it applies to this thread.

  1. The core problem appears to be process issues (language issues only come into the picture because there is disagreement about whether, when and how they should be considered).
  2. I think the main perceived process issue might be that there is a lack of visible language guardianship, and no clear venue to talk about this in a positive manner. If you want to keep a certain feature as-is, either out of desire for simplicity or to keep options open for a certain possible future direction, then you have to re-litigate this on every discussion about changes in this area.

FWIW I think the Rust model of deliberation is working pretty well overall especially because people are recognizing where things are not going well and making changes. See the call for blogs, the eRFC process etc. All of that seems incredibly healthy, and ahead of just about any other open source project I know. The issue I’m describing might just be an unavoidable consequence of organizational growing pains, and might not be fixable without throwing away the baby with the bathwater. But I still think it’s worth thinking about.



This is one of the reasons I keep bringing up “implicit await” and “implicit ?”-like designs: they enable this sort of polymorphism without requiring a new “await-if-this-is-a-future” or “?-if-this-is-a-Result” operator. For instance, here, or the “Motivation > Future-proofing” section here.

Not sure if “hide the effect unwrapper when in the effect context” is the best way to achieve this sort of polymorphism, but it’s definitely one of the less messy options I know of.



Just to clarify: Rust’s evolution is ultimately governed by a set of formal teams covering various areas of development. The Language Design team ultimately makes decisions on language-related RFCs, and works closely together (and in consultation with the broader community) to ensure an overall coherent vision of where to take the language.



As promised, I’ve created a separate topic for the bloatness prevention meta-discussion. May I suggest we move further comments related to that to Fortifying the process against feature bloat?



I’ve wondered before whether there’s a fundamental difference between “carried” effects like async/try and “erasable” ones like const/unsafe. I can totally imagine how polymorphism for the latter might work, but I have no idea how to abstract away differences like for_each vs try_for_each.



To be clear, I’m still not saying whether even the general direction would be a good idea.

But here’s an idea of how it could maybe be implemented with current or planned Rust features. I’d specifically rely on specialization here. I don’t know what exact state it is in, whether all the things here will be allowed, and I have never used the syntax, so it’ll probably be wrong according to the current proposal, but it’s about the idea.

  1. Let’s create a trait that defines how a type acts with regard to a carried effect, or whether it carries anything at all.
trait Propagate {
  // Hmm, can I have a generic associated type?
  type OuterType<T>;
  fn should_bail(&self) -> bool;
  fn to_bail<T>(self) -> Self::OuterType<T>;
  fn from_good<U>(u: U) -> Self::OuterType<U>;
}
  2. Define it for all „normal“ types:
default impl<T> Propagate for T {
  type OuterType<U> = U;
  fn should_bail(&self) -> bool { false }
  fn to_bail<U>(self) -> U { unreachable!() }
  fn from_good<U>(u: U) -> U { u }
}
  3. Implement it for Result (for simplicity, just that one for now):
impl<T, E> Propagate for Result<T, E> {
  type OuterType<U> = Result<U, E>;
  fn should_bail(&self) -> bool { self.is_err() }
  fn to_bail<U>(self) -> Result<U, E> { Err(self.unwrap_err()) }
  fn from_good<U>(u: U) -> Result<U, E> { Ok(u) }
}
  4. Use it:
fn do_stuff<R: Propagate, F: FnOnce() -> R>(f: F) -> R::OuterType<usize> {
  let res = f();
  if res.should_bail() {
    return res.to_bail();
  }
  // ... extract the success value (that would need one more trait method)
  // and do something with it ...
  R::from_good(42)
}

I know it’s ugly and unergonomic, and maybe something here is not allowed or implemented right now… but my point is: is there a chance one could implement such a thing with just the type system?



I have to say that I’m also not a terribly huge fan of the proposal. There is a proliferation of (proposed) keywords that all essentially do the same thing: return, ?, break, yield, await, fail, pass, an expression in final position and, arguably, panic!. They all pass a value from one function, closure, coroutine or block to another.

They differ only in which “pathway” to take upon yielding control. Depending on the context, there are several such pathways:

(try) fn         result or error
generator fn     yield value, yield error*, final value or final error*
async fn         not ready, value or error
stream fn        not ready, yield value, yield error*, final value or final error*

(* Incidentally, in generator-like closures which one will ? result in - yield error or final error?)

Now, there is a recent proposal to add Result-like error handling to C++. Since this is used for return values only, the discriminant doesn’t need to be explicitly stored, but can be passed along in a register or CPU flag.

Going further, the function could even return to different places in the caller or co-routine, depending on which pathway is chosen (this would add some call overhead, but simplify result checking). Essentially, this would lead to generic state-machine handling that could be used for all of the above cases.

Even though I acknowledge that this is not feasible immediately, I would be happier if there was a visible path towards a unification like this, rather than just adding one keyword after another for what are really similar use-cases.



Note that that’s just an ABI question. Thanks to compiler improvements, Rust can also do that! As a simple example,

pub fn foo() -> Option<usize> {
    Some(12345)
}

is just

	mov	eax, 1
	mov	edx, 12345
	ret

(And it’s a general optimization, not something Option-specific.)



This can be rather straightforwardly implemented as a macro. This example doesn’t support edge cases like closures, recursive uses of return, and macros that return, but those edge cases could be implemented too, I think (except, I guess, for returning macros).

macro_rules! __try {
    ([@process_tokens] $c:tt [$($retrieved:tt)*] return $ex:expr ; $($rest:tt)*) => {
        __try!([@process_tokens] $c [$($retrieved)* return Ok($ex) ;] $($rest)*);
    };
    ([@process_tokens] $c:tt [$($retrieved:tt)*] return $ex:expr) => {
        __try!([@process_tokens] $c [$($retrieved)* return Ok($ex)]);
    };
    ([@process_tokens] $c:tt [$($retrieved:tt)*] throw $ex:expr ; $($rest:tt)*) => {
        __try!([@process_tokens] $c [$($retrieved)* return Err($ex) ;] $($rest)*);
    };
    ([@process_tokens] $c:tt [$($retrieved:tt)*] throw $ex:expr) => {
        __try!([@process_tokens] $c [$($retrieved)* return Err($ex)]);
    };
    ([@process_tokens] $c:tt $retrieved:tt ( $($in_parens:tt)* ) $($rest:tt)*) => {
        __try!([@process_tokens] [@() $c $retrieved $($rest)*] [] $($in_parens)*);
    };
    ([@process_tokens] $c:tt $retrieved:tt { $($in_parens:tt)* } $($rest:tt)*) => {
        __try!([@process_tokens] [@{} $c $retrieved $($rest)*] [] $($in_parens)*);
    };
    ([@process_tokens] $c:tt $retrieved:tt [ $($in_parens:tt)* ] $($rest:tt)*) => {
        __try!([@process_tokens] [@[] $c $retrieved $($rest)*] [] $($in_parens)*);
    };
    ([@process_tokens] $c:tt [$($retrieved:tt)*] $token:tt $($rest:tt)*) => {
        __try!([@process_tokens] $c [$($retrieved)* $token] $($rest)*);
    };
    ([@process_tokens] $c:tt $retrieved:tt) => {
        __try!($c $retrieved);
    };
    ([@() $c:tt [$($retrieved:tt)*] $($rest:tt)*] [$($result:tt)*]) => {
        __try!([@process_tokens] $c [$($retrieved)* ($($result)*)] $($rest)*);
    };
    ([@{} $c:tt [$($retrieved:tt)*] $($rest:tt)*] [$($result:tt)*]) => {
        __try!([@process_tokens] $c [$($retrieved)* {$($result)*}] $($rest)*);
    };
    ([@[] $c:tt [$($retrieved:tt)*] $($rest:tt)*] [$($result:tt)*]) => {
        __try!([@process_tokens] $c [$($retrieved)* [$($result)*]] $($rest)*);
    };
    ([@find_brace] $found:tt { $($inner:tt)* }) => {
        __try!([@process_tokens] [@finish $found] [] $($inner)*);
    };
    ([@find_brace] [$($found:tt)*] $t:tt $($rest:tt)*) => {
        __try!([@find_brace] [$($found)* $t] $($rest)*);
    };
    ([@finish [$($found:tt)*]] [$($code:tt)*]) => {
        $($found)* {
            $($code)*
        }
    };
}

macro_rules! try_fn {
    ($($tokens:tt)*) => {
        __try!([@find_brace] [] $($tokens)*);
    };
}

fn early_return() -> bool {
    false // stub so the example compiles
}

fn is_error() -> bool {
    true // stub so the example compiles
}

#[derive(Debug)]
struct MyError;

fn check_another_error() -> Result<(), MyError> {
    Ok(())
}

try_fn! {
    fn foo() -> Result<u64, MyError> {
        // returns Ok(10), i.e. happy path is Ok autowrapped
        if early_return() { return 10; }
        // returns `Err(MyError)`
        if is_error() { throw MyError; }
        // `?` works as well
        check_another_error()?;
        return 0;
    }
}

fn main() {
    println!("{:?}", foo());
}


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.