Automatic "?" inside try {...} blocks

Can someone please point me to discussion of applying “?” automatically to all Result returning function calls within try {…} blocks? I’ve spent some time digging through RFC discussions and posts here in this forum but to no avail.

To clarify, I’m referring to the idea where you could write this…

try {
    let mut f = File::create("foo.txt");
    f.write_all(b"Hello world!").extraWork()
}

… instead of this:

try {
    let mut f = File::create("foo.txt")?;
    f.write_all(b"Hello world!")?.extraWork()?
}
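For comparison, here is a sketch of how the explicit-`?` version behaves in today's stable Rust (try blocks are unstable, and extraWork is the poster's placeholder, so both are omitted): an ordinary function returning Result, where each `?` early-returns the error to the caller.

```rust
use std::fs::File;
use std::io::Write;
use std::path::Path;

// A sketch using a plain `Result`-returning function instead of a `try`
// block; each `?` early-returns the `Err` to the caller.
fn write_greeting(path: &Path) -> std::io::Result<()> {
    let mut f = File::create(path)?;
    f.write_all(b"Hello world!")?;
    Ok(())
}

fn main() {
    // Write to a temp file so the example has no stray side effects.
    let path = std::env::temp_dir().join("try_block_demo.txt");
    write_greeting(&path).expect("write failed");
    assert_eq!(std::fs::read_to_string(&path).unwrap(), "Hello world!");
}
```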

What if I want to save off the result and match on it in a separate statement?

1 Like

Nooo, please don’t do this! The point of ? is exactly that it removes most of the noise associated with error handling, yet it remains explicit and we retain the ability to know which expressions can fail. Making it implicit would be a huge step backwards, it would be impossible to tell based on the syntactic structure of code where errors can come from. That is a very bad tradeoff for shaving off one single character.

This also has a consistency issue: if it’s allowed for function calls, is it allowed for other expressions too? If it’s not, it’s an ugly special case. If it is, well, that’s even worse, for the reasons listed above.

Not to mention other, more severe problems of composability: this proposal would also completely prevent all Result-yielding calls (or other expressions) from being used in any other manner inside try, which is highly undesirable, to say the least.

Please, just type that ? out.

33 Likes

I don't think there's been much of one.

That said, you could probably just replace async with try and await with ? in

(Personally, I think I prefer the explicit-? and explicit-await versions.)

1 Like

Unsurprisingly, people have brought up the very same problems in the context of implicit await: it looks like one thing but does another, and it makes futures/async fns impossible to consume in a non-awaiting way. (I agree that those are equally serious problems in the case of async/await too.)

1 Like

Why even have implicit await? Is there any reason at all to prefer that over an explicit await! invocation?

1 Like

Laziness, I guess? I don’t mean to be rude, but honestly I don’t see any other genuine reason for it. I’ve encountered the “less typing” argument worryingly often these days: there was a proposal for turning .into() calls into a postfix ! operator “because it’s less typing”; there was the implicit await discussion, which again boiled down to the “it’s common so I should not need to type it” argument; and many other similar proposals have come up. Unfortunately, these seem to consider exclusively the marginal improvement in the length of the resulting code, while mostly ignoring other, serious issues such as actual code legibility.

3 Likes

Maybe it’s just me, but this kind of useless bikeshedding is to be expected from a growing community that is open to this extent. A subset of N out of M people really grok the deep stuff (e.g. GATs or macros 2.0), and around that subset is a larger subset O out of M of enthusiastic but less knowledgeable people (at least in those particular topics). That larger group of O people is automatically biased in the direction of bikeshedding, as they would like to contribute (as per their enthusiasm, which is really great to have), but are not really able to in that capacity, if we’re being honest.

Not that I disagree; a lot of current proposals and pre-RFCs that I see related to syntactic changes can go straight into the trash bin, and the language is better for it. And typically it’s quite easy to see how inexperienced the RFC writer in question is, as the RFC itself will then not even discuss e.g. more involved but further-reaching consequences of the feature. I don’t mean inexperienced in the art of writing an RFC, mind you; I’m referring to inexperience with designing programming languages.

What inexperienced programmers often fail to grasp is that code terseness, while useful (who likes boilerplate, right?), should never be the goal. Rather, I think it is more useful to think about expressive power, which is a different thing entirely. For example, the expressive power of the match expression is through the roof, but usage tends to be not-quite-terse. A match expression may not typically fit on a single line, but that doesn’t matter, since it expresses the solution to a problem really well. I think this is also an example of something more general: programmers know the value of everything, and the cost of nothing.

There is a thread somewhere (either here or on users.rust-lang.org, I forget which) about raising the bar for such changes. I wholeheartedly agree with that. In fact, I think there are a few tests each new language feature should undergo by default, which perhaps should even be included in new RFCs from day 0:

  • An orthogonality check: good language features tend to be orthogonal to others, allowing for clean composition, while non-orthogonal language features tend to have unexpected or even nasty corner cases. A good example of a non-orthogonal feature is the fact that it’s legal in Rust to write Foo { x, y, z } rather than Foo { x: x, y: y, z: z }. Specifically, it conflates the name of the field to bind with the value to bind to it, which I find conceptually perverse. Yes, the latter form duplicates a few tokens, but the x’s on either side of the colon have differing semantics, so it’s not really duplication. In addition, the value named x could have an entirely different local name binding, or even be an expression, in which case there is no duplication at all.
  • A cost/benefit analysis: surface changes to syntax in the “boilerplate reduction” category have a tendency to cost more than they actually return in terms of value. This goes double for Rust due to the existence of various kinds of macros, allowing users to create their own DSLs to cut down boilerplate.
    A programmer’s inability to write such a DSL should not mean punishing other Rustaceans by imposing extra complexity on them. Nor should it mean dealing with a mess akin to what I’ll call the C++ dialect problem, where every project using that language needs to set ground rules about 1. which language features (not) to use and 2. syntactic style. Note, however, how the C++ dialect problem came to be: the unbridled introduction of new “features” that in effect make the language wholly incomprehensible even to C++ experts, in the sense that they cannot hold the entire language in their head all at once, as can be done with e.g. C, Python, and Lua. Hence they choose a subset appropriate to the project. In this regard, Rust is arguably at or even already over the edge, which is only more reason to be careful with new language-level features.
  • Perhaps if there were some way to communicate that surface-level syntax changes are less likely to be accepted, as well as why that is, this would discourage such low-value RFCs. I’m less clear on how to achieve this, though, as new users don’t always seem to read the Rust manual or other docs before posting, so putting the information there wouldn’t necessarily have an impact in this regard.
4 Likes

@jjpe @pythonesque Your thoughts sound like good topics for a new thread (but perhaps I’ve posted my question in the wrong place? If this forum is for proposals only, I’m happy to post my question somewhere else.)

Here’s some context for my question:

I’ve been reading recent RFCs (pointed out by This Week in Rust) about error handling mechanisms such as try/throw/catch. I’m finding this to be an innovative hybrid of regular return values and exceptions, which seems like something genuinely new that Rust brings to the language space. So as a personal project I’ve been exploring in what ways a try/catch-like mechanism absolutely must differ, owing to the fundamental differences between exceptions and return values, as opposed to their vastly greater number of similarities.

In the case of code within try {…} blocks, I wondered why the need for ? was retained, because it seems redundant: a bit of a wart left over from the process of evolution.

@sfackler In all the cases I’ve thought of where I’d use a try block, the Err result would be used/handled at the end. Do you see them frequently used in combination?

@scottmcm thanks for pointing me to async - that’s a feature I really haven’t had time to explore yet!

(also cc @sfackler)

Not to hijack this thread to have this exact same discussion again for the umpteenth time, but this is completely false for implicit await and it's equally false for implicit ?. Everything is still expressible; the only thing that's changed is where the explicit annotations go.

Under the "implicit" version, when you want to consume an async fn or a Result-returning-fn in a non-await or non-? way, that's where your annotation goes. You write let future = async { some_async_fn() } or let result = try { some_try_fn() }. This is why the thread is titled "explicit future construction."

Personally I could go either way on the "implicit ? in try" idea. On the one hand I think it's neat, and it loses zero expressive power. On the other hand, early exits are more important than suspension points, Result-returning calls outside of try are relatively common, and everyone's already used to ?, so meh. Maybe I'll try it out in a toy language or something.

The reason I wrote up that proposal for async is emphatically not "too much typing." It's to avoid the confusion, pointed out by the designers and users of async/await in other languages, that arises from function calls that don't run the callee's entire body. Under the current system, an un-annotated function call doesn't run any of the function's body! Under "implicit await," an un-annotated function call always runs the callee to completion, regardless of the context, and the exceptional case of delaying it is what gets annotated.

There are totally valid arguments to prefer explicit await. So if you want to argue against it, please use them instead of non-issues.

5 Likes

I don't see try blocks used anywhere right now. One example of matching an error directly is to e.g. treat NotFound errors differently from other errors when opening a file that may not exist.
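A minimal sketch of the pattern described above (path and function names are illustrative): treat NotFound as a non-error case and propagate everything else.

```rust
use std::fs::File;
use std::io::ErrorKind;

// Treat a missing file as `Ok(None)` while propagating all other I/O
// errors; this is the kind of direct matching on an error described above.
fn open_if_exists(path: &str) -> std::io::Result<Option<File>> {
    match File::open(path) {
        Ok(f) => Ok(Some(f)),
        Err(e) if e.kind() == ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() {
    // A path that should not exist; we expect the `NotFound` branch.
    let r = open_if_exists("definitely_missing_file_xyz.txt");
    assert!(matches!(r, Ok(None)));
}
```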

I don't understand how this would be implemented. Would a ? be injected immediately around anything that returns a Result? Will this work or not? foo().map_err(|e| SomeOtherType::new(e, "hi")).bar()
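With today's explicit ?, the placement question in the quoted snippet is answered by the programmer rather than the compiler. This sketch uses minimal stand-ins for the hypothetical foo, bar, and SomeOtherType names from the snippet:

```rust
// Minimal stand-ins for the hypothetical names in the quoted snippet.
#[derive(Debug, PartialEq)]
struct SomeOtherType(String);

impl SomeOtherType {
    fn new(e: String, tag: &str) -> Self {
        SomeOtherType(format!("{tag}: {e}"))
    }
}

fn foo() -> Result<i32, String> {
    Err("boom".to_string())
}

fn demo() -> Result<i32, SomeOtherType> {
    // `map_err` operates on the `Result` itself; the explicit `?` then
    // unwraps it, so any further call (the snippet's `.bar()`) would apply
    // to the `Ok` value. An auto-inserted `?` would have to guess this.
    let n = foo().map_err(|e| SomeOtherType::new(e, "hi"))?;
    Ok(n)
}

fn main() {
    assert_eq!(demo(), Err(SomeOtherType("hi: boom".to_string())));
}
```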

I suspect that, like the implicit await proposal, it would work best by applying only to try fns. Which, since we don’t have those yet but already have countless Result-returning functions, makes it somewhat harder to come up with a smoother transition path than “hey everyone rewrite your fn() -> Results to try fns.”

I doubt anyone disputes this; programming language technology is a scientific field within computer science, and of course those who have studied it will be able to participate more fully in the more technically advanced discussions.

There are whole academic papers purely about reducing boilerplate, some examples are:

These papers are written by people with a lot of experience in designing programming languages. (Yes, they are mostly about library design in this case.)

I find that terseness, where used to remove all the plumbing that is unimportant to the problem being solved, is beneficial for readability.

Indeed, pattern matching in the form of match adds massive expressive power, but that does not negate the need for simpler constructs as syntactic sugar for extremely common use cases. We could eliminate if cond { expr } else { expr }, if let A(x) = expr { .. }, and even let bindings, and have only match, without losing the ability to express anything. It would just get more cumbersome.
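To illustrate the claim, a small sketch of how an if let can be written with only match (the enum and names are made up for the example):

```rust
enum Shape {
    A(i32),
    B,
}

// Sugared form:
//     if let Shape::A(x) = s { *x } else { 0 }
// Desugared equivalent, using only `match`:
fn first_a(s: &Shape) -> i32 {
    match s {
        Shape::A(x) => *x,
        _ => 0,
    }
}

fn main() {
    assert_eq!(first_a(&Shape::A(7)), 7);
    assert_eq!(first_a(&Shape::B), 0);
}
```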

Another example is loop { .. }, while let and for. You could similarly get rid of the latter two forms and retain only the first, provided that you have break (which we do).
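Likewise, a sketch of while let expressed with only loop, match, and break, as described (the function is made up for the example):

```rust
// Sugared form:
//     while let Some(x) = stack.pop() { sum += x; }
// Desugared equivalent, using only `loop` + `match` + `break`:
fn drain_sum(mut stack: Vec<i32>) -> i32 {
    let mut sum = 0;
    loop {
        match stack.pop() {
            Some(x) => sum += x,
            None => break,
        }
    }
    sum
}

fn main() {
    assert_eq!(drain_sum(vec![1, 2, 3]), 6);
}
```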

So clearly, we have historically thought that syntactic sugar, where helping readability and being sufficiently common, should be introduced, and that it matters.

Of course, if the language were flexible enough that you could have mixfix operators like in Agda, as well as opt-in lazy evaluation at some points, you could model all of these constructs in libraries; but we are not there, and proc macros are too heavyweight and costly to design for EDSL-writing to be pain-free.

In this respect, good language design will decompose features into more general constructs. If you are able to take a language with more specialized features and apply some desugaring to them into a core language, then the interactions become much easier to reason about. SPJ talks about this in Into the Core - Squeezing Haskell into Nine Constructors.

This is still pure syntactic sugar and simple to reason about; just desugar into the latter form and continue on from there.

I think it is fashionable to knock C++, but if we are being honest, I think this is also true of Rust, and of almost every mainstream programming language. This is not necessarily a problem in my view; there will always be some specialized features (particularly w.r.t. memory layout and such in Rust) that only some people care about. I don't think you need to hold an entire programming language in your head; in this respect, programming languages are like natural languages in that no one actually knows a whole language.

First you would need to show that this is actually the current policy, or else change the current policy. I don't think we should reject proposals that introduce new sugar which solves pervasive problems and improves readability across a large part of the ecosystem.

I don't think this particular proposal re. try {..} solves a problem (and even causes problems..), but there certainly are those that do.

@sfackler I probably should have written “Do you envision them frequently used in combination?”

I should read more about what’s been discussed for async. When I was picturing how it would work, I was imagining the compiler inserting an implied “?” anywhere the returned Result was not already explicitly being used.

No, that is completely false. If you have a bunch of fallible and infallible calls within a try block, you lose the information as to which ones are fallible. You lose the distinction. This proposal did not suggest annotating non-? handling of Results. If you assumed it suggested a sort of "inversion" of annotations inside and outside try blocks – that's not what it did. (Even if it did, I would oppose that direction too, but for other reasons.)

See above. Also, there is a language where try-catch was designed to be used with explicit annotations for each individual function call: in Swift, a do…catch block requires all throwing calls to be marked with try. The syntax differs (they use do instead of try, and a prefix try instead of a postfix ?), but the reasoning behind that design decision was exactly the same as mine: not requiring these annotations would lose information, just not all of it.

Please refrain from 1. putting words in my mouth, and 2. crafting strawman arguments. The issues I brought up are issues — if you don't understand why they are so, I can't help.

1 Like

Yes, this is true. It's also something I acknowledged in the implicit await proposal, and further (because early returns are a bigger deal than suspension) gave as a reason that this model might not work as well for try as I think it does for async.

It didn't need to propose it explicitly; it's inherent in try blocks themselves. If you need a Result, you can wrap the call in another try block and get one back, just like in the implicit await proposal.

Thus, the idea that implicit await "makes futures/async fns impossible to consume in a non-awaiting way," or the idea that this supposed impossibility extends to "implicit ?" (including your pre-edit claim that I ignored this), is false.

That's all I was ever saying: no strawman arguments, just pointing out that the implicit versions don't take away your ability to work with Results/Futures inside their associated contexts.

1 Like

@H2CO3 thanks for pointing out Swift - I wasn’t aware that they had adopted a similar hybrid model.

I can see the merit in requiring explicit “?” to identify places where changes to control flow could happen. Certainly in C++, part of what makes exception safety tricky for newcomers is remembering that exceptions can come flying through pretty much any function or operator.

That said, I like Stroustrup’s observation that people often want loud/noisy syntax for new features and more concise/quiet behaviour for features that are familiar. In this case, I wonder if requiring redundant “?” in try blocks would become a regret as these features become old and familiar.

But I will read more about async and how Swift does things.

I don't think the analogy is correct in this context. The use of ? is no more redundant than the noexcept specifier in C++; they are equally informative. While ? informs you that an error might be propagated, noexcept informs you that there's no exception to propagate.

I don’t see a strong comparison to be made between Rust’s ? and C++ noexcept - their functionality is not related. Can you clarify what you meant?

3 Likes

That sounds... literally impossible. As soon as you call a method on the return value of a Result-returning method (or give it to a generic function), the compiler would have no way of knowing whether you intended to do this to the result or the Ok value without a type annotation.
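A concrete sketch of that ambiguity, using iter, a method that happens to exist on both Result and the Ok value (a Vec here); the function names are made up:

```rust
fn fallible() -> Result<Vec<i32>, String> {
    Ok(vec![1, 2, 3])
}

fn demo() -> Result<(usize, usize), String> {
    // `.iter()` on the `Result` itself yields at most one item (the Ok value)...
    let on_result = fallible().iter().count();
    // ...while `.iter()` on the `Ok` value, reached via an explicit `?`,
    // yields the Vec's elements. An auto-inserted `?` could not tell which
    // of these two readings was intended.
    let on_ok = fallible()?.iter().count();
    Ok((on_result, on_ok))
}

fn main() {
    assert_eq!(demo(), Ok((1, 3)));
}
```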

3 Likes