I believe this is a bad idea. (Sorry for the language in advance, I'm not a native speaker.)
First, there will be two flavours of code in the wild: the old kind, which shows what actually happens, and the new one, which makes people coming from languages with exceptions believe Rust has some form of exceptions, even though this actually has nothing to do with exceptions in other languages.
Second, languages like C++ are moving in Rust's direction (std::optional, std::expected), for good reason. Rust steers people toward more local error handling, which in my opinion is much better than catching everything in main.
Third, I'm afraid that most people who use Rust don't have the time to follow such ideas, and might be totally surprised if such changes happen, so please don't rush this. Make it visible to everybody and try to get as many people as possible to comment on it. I'm a bit afraid of RFC opinion bubbles.
This is just a discussion of this general area of language design so far... An official RFC has yet to be filed. Generally, the Rust team and community do a really good job of getting community feedback about new features.
There is also a long path ahead for these ideas before (and if) they make it into the language. Someone has to open an RFC on GitHub (rust-lang/rfcs: RFCs for changes to Rust). Then there will be more discussion. If the feature is accepted, it is implemented in nightly Rust first, where people can play around with it and it can be modified for a while. Eventually, it would be stabilized. At any point during this process, the proposal could be changed or rejected.
The result is a great language that makes people happy and productive
This is one of the most valuable aspects of Rust as a language for me.
I feel like it's often overlooked, or taken for granted as if it were unbreakable.
In reality, it needs to be looked after, protected, and hardened.
It's one of the things that differentiate Rust from so many other languages.
It's what gave me the confidence to go all-in on Rust (coming from C/C++/Swift) when I first found out about it, and to trust that it will not turn into one of those many "everything and the kitchen sink" languages.
Yes, Rust is a complex language. But no, Rust is not a complicated language.
Rust's syntax as well as semantics are based upon a small and easily memorizable set of orthogonal components.
Let me give an example.
(Please excuse the length and what might seem like going off-topic for a bit. It's not, promised.)
(For the sake of this example allow me to assume that this Pre-RFC ("unnamed struct types") had landed by now.)
In Rust there is a hierarchy of types:
Unnamed types (tuples)
An unnamed type can have zero members, n indexed members, or n named members:
()
(T, U, …)
{ t: T, u: U }
Named types (structs)
A named type is a combination of one of those three kinds of unnamed types, plus a name:
struct Unit (Rust omits the () here)
struct Indexed(T, U, …)
struct Named { t: T, u: U }
Once you know tuples, you know structs.
They can be thought of as "just named tuples".
Sum types (enums)
A sum type is a combination of one or more named types:
enum Foo {
Unit,
Indexed(T, U, …),
Named { t: T, u: U },
}
Once you know structs, you know enum variants.
They can be thought of as "just unions over structs",
that need to be pattern matched to access.
This example may seem completely unrelated to the topic at hand, but bear with me.
Once you know how to pattern match on tuples,
you know how to pattern match on structs:
match Unit {
Unit => …,
}
match Indexed("indexed", true) {
Indexed(name, ..) => …,
}
match (Named { name: "named", flag: true }) {
Named { name, .. } => …,
};
Once you know how to pattern match on structs,
you know how to pattern match on enum variants:
match Foo::… {
Foo::Unit => …,
Foo::Indexed(name, ..) => …,
Foo::Named { name, .. } => …,
}
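To make the parallel concrete, here is a self-contained sketch of my own that combines the three matches above. The concrete field names (name, flag) and types (&'static str, bool) are placeholders I chose for illustration, not anything prescribed by the proposal.

struct Unit;
struct Indexed(&'static str, bool);
struct Named { name: &'static str, flag: bool }

enum Foo {
    Unit,
    Indexed(&'static str, bool),
    Named { name: &'static str, flag: bool },
}

fn main() {
    // Tuple patterns...
    let (name0, _flag) = ("tuple", true);

    // ...look just like tuple-struct patterns...
    let Indexed(name1, ..) = Indexed("indexed", true);

    // ...and struct patterns...
    let Named { name: name2, .. } = Named { name: "named", flag: true };

    // ...and the very same patterns, one level deeper, match enum variants.
    let name3 = match (Foo::Named { name: "enum", flag: true }) {
        Foo::Unit => "unit",
        Foo::Indexed(name, ..) => name,
        Foo::Named { name, .. } => name,
    };

    println!("{} {} {} {}", name0, name1, name2, name3);
}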
By making use of carefully chosen combinations of orthogonal language features
Rust allows one to apply very few rules (with almost no exceptions!!!)
to a broad range of uses, such as:
Every type can be destructured/pattern-matched.
Tuples, structs, enums. No secret sauce necessary.
Every type can implement functions.
Tuples, structs, enums. No secret sauce necessary.
Every type can be used as reference or value type.
Tuples, structs, enums. No secret sauce necessary.
Every type can be used as mutable or immutable value.
Tuples, structs, enums. No secret sauce necessary.
Every type can be <enter characteristic here>.
Tuples, structs, enums. No secret sauce necessary.
Mutability and reference-typed-ness are realized through composition, rather than by introducing special cases. Same goes for memory management, atomicity, … you name it.
… see a pattern here? Rust is composition all the way down.
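As a concrete illustration of that "no secret sauce" claim, here is a small self-contained sketch of my own. The trait Describe and the types Point and Shape are hypothetical names I made up; the point is only that one mechanism (a trait impl plus pattern matching) covers tuples, structs and enums alike, whether used by value, by reference, mutably or immutably.

// One ordinary trait, implemented uniformly for a tuple type, a struct, and an enum.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for (i32, bool) {
    fn describe(&self) -> String {
        format!("tuple: ({}, {})", self.0, self.1)
    }
}

struct Point { x: i32, y: i32 }

impl Describe for Point {
    fn describe(&self) -> String {
        format!("struct: ({}, {})", self.x, self.y)
    }
}

enum Shape {
    Dot,
    Pair(i32, i32),
}

impl Describe for Shape {
    fn describe(&self) -> String {
        match *self {
            Shape::Dot => "enum: dot".to_string(),
            Shape::Pair(a, b) => format!("enum: ({}, {})", a, b),
        }
    }
}

fn main() {
    // Owned, borrowed, and mutated values all follow the same rules.
    let mut point = Point { x: 1, y: 2 };
    point.x += 1;
    let by_ref: &dyn Describe = &point;
    println!("{}", (3, true).describe());
    println!("{}", by_ref.describe());
    println!("{}", Shape::Pair(4, 5).describe());
}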
It's these clearly visible patterns and the lack of exceptions to most rules that make Rust,
while no doubt one of the more complex languages, such a joy to learn, teach and use.
Learning patterns scales roughly linearly, O(n + m). (Rust: a complex language.)
Learning cases/exceptions scales multiplicatively, O(n * m). (Swift: a complicated language.)
Instead of learning how to destructure/pattern-match, call functions on, borrow, … tuples,
and then having to learn a completely different set of rules for structs,
and yet more and utterly different rules for enum variants,
one just has to learn it once for the lowest abstraction and then build the higher ones from it.
I don't know any other similarly complex language with such a rich set of type-flavors
that still manages to not require one to memorize a bunch of exceptions and sugars for every single one of its rules/parts, as is the case for most other non-trivial languages.
So what does all this have to do with catching functions, again?
This:
Syntactic sugar that simplifies the use of some things at the cost of obfuscating their true nature has the ugly downside of making it impossible for the observer to see the language patterns that one would otherwise find easily. Without patterns, one is forced to memorize every single configuration separately as an individual case, again blurring the semantics that unify them. This is what happened to Swift.
Catching functions, I fear, risk becoming one of those sugars that do more harm than good.
I've seen a number of comments along this line in this thread, and I'd like to request that those making them take a closer look at what they dislike about exceptions and elaborate on how the proposal is changing what rust currently does in those areas.
For me, I like Result+? over, say, C# exceptions because:
The possible errors are explicit in the type of the function
All the places that can fail are clearly marked with ?
The return value is a real type, which can be saved or restored, put in containers, passed to methods, etc. (see the sketch below)
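A tiny sketch of my own to illustrate those three points with stable Rust; parse_port is a hypothetical function name.

use std::num::ParseIntError;

// 1. The possible error is explicit in the signature.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    // 2. The one place that can fail is clearly marked with `?`.
    let port: u16 = s.parse()?;
    Ok(port)
}

fn main() {
    // 3. The return value is an ordinary value: it can be stored in a
    //    container, passed around, and inspected later.
    let results: Vec<Result<u16, ParseIntError>> =
        vec!["80", "not a number"].into_iter().map(parse_port).collect();
    for r in results {
        match r {
            Ok(port) => println!("ok: {}", port),
            Err(e) => println!("err: {}", e),
        }
    }
}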
There's nothing in this proposal that changes any of those things, so I don't understand the doom and gloom that seems so commonly expressed.
It probably could, as try a? + b? is somewhat reasonable, but I think forcing the block is easier and more forward-compatible than trying to pick exactly what the precedence would need to be.
Well, here's a trivial and obviously-contrived function that's "infallible":
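The snippet itself did not survive here. As a stand-in of my own (not the original code), a contrived function whose signature admits no failure at all would fit the description:

// Stand-in example, not the original snippet: trivially "infallible".
fn double(x: i32) -> i32 {
    x + x
}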
The existing ? lets me explicitly mark things as fallible when I want the by-far-the-most-common interpretation of "continue along only when successful". With "function-level-?" (or whatever this becomes), it lets me say the same "the 'normal' path is the successful one" for the entire function.
main is successful when it reaches the end. #[test]s are successful when they reach the end. Infallible functions are successful when they reach the end. -> ! functions are, in a sense, never successful because they never reach the end. Most fallible functions using ? are also successful on reaching the end today; this would just help solidify that.
(Part of me even wants to double-down on this and only allow ? and throw in "catching functions", like how C# only has await in functions that are async. That way they're one "continuing along means success" feature. But that's a more hardline version than I expect would get traction.)
Actually, I think my knee-jerk reaction was premature. I think it's fine for |x| catch { ... } to be function-level catch because
Most of the time they do the same thing,
A function body containing only a catch is pointless, and
Someone could do |x| { catch { ... } } if they really wanted to.
That could go wrong once macros are involved. You'd always have to pass the catch block inside { ... } to macros to ensure you get expression semantics.
One thing we could do, which is potentially really contentious (…so hear me out haha) is leave ? solely for non context-based situations, put no annotation on catching functions inside catch blocks, and instead put the annotation on the case where you want a Result value in a catching function.
So you could write any of these (bikeshedding aside, I’m just demonstrating the non-usage of ?):
// traditional style
fn try_f() -> Result<T, E> {
let x: T = try_g()?;
if fail() { Err(e)?; }
Ok(x)
}
// catch context inside the function
fn try_f() -> Result<T, E> {
try {
let x: T = try_g();
if fail() { throw e; }
x
}
}
// catch context as the function
try fn f() -> T {
let x: T = try_g();
if fail() { throw e; }
x
}
// when you actually need a Result
try fn f() -> T {
let x: Result<T, E> = <some annotation> try_g();
...
}
Now, the reasons I think this might actually work out: It solidifies the distinction between the two contexts (like C# async functions), preserves the distinction between early-return and non-early-return, and is backwards compatible.
Finally, two analogies in support of this idea: unsafe blocks and Kotlin coroutines.
I've compared this to unsafe before: you don't annotate every single unsafe operation, you just use them in an unsafe block or function. The context itself is enough. Notably, you can also write unannotated safe operations in an unsafe block and that doesn't seem to be an issue either.
Kotlin’s coroutines work this way and they look really nice. Like C# async/await, they are a compile-time state machine transformation. But instead of littering awaits (or await!()s) everywhere, they just work via contexts:
// some coroutine, aka a Generator
suspend fun do_some_io() { ... }
fun non_coroutine() {
// do_some_io() // ERROR: unlike C#, can't accidentally fire off an async op in a sync context
runBlocking { // runBlocking() takes a suspend fun closure and runs it to completion
do_some_io() // this is implicitly awaited
val x = async { do_some_io() } // async() takes a suspend fun closure, launches it, and returns a future
x.await() // await() is just a suspend fun (which is thus implicitly awaited here)
}
}
Here, runBlocking is similar to ? in that it lets you call a “special” function in a non-special context; implicitly awaiting is like implicitly try!()ing; and async is like the annotation to give you a Result from inside a catching function.
I’m a huge fan of this for async/await because await!()-heavy code is really noisy, so I thought I’d sketch out what it looks like to apply the same idea to Result. I kind of like it!
Interesting idea. If I am understanding correctly, you mean that we would not be able to see where errors are potentially propagated from inside a try block? (but rather it would be sort of implicit?) If so, that feels like a big step backward -- I frequently rely on ? to understand control flow when reading code. I am also not entirely sure how we would set up the rules at all to work this way (does it apply to any function that returns a Result?)
That's correct: I justify that to myself by comparison to unsafe (not sure which operations are unsafe) and by comparison to Kotlin (not sure which operations can suspend), since those both seem to have worked out, but error handling certainly could require different tradeoffs.
No, only to try fns, which may or may not even look like they're returning a Result depending on how that bikeshed gets painted. (Much like you can only call Kotlin suspend funs from within other suspend funs, and must use something like async or runBlocking otherwise.)
This is why I feel like the name unsafe is a little bit unfortunate. In my mental model it's not even about an unsafe code block: it's not that Rust's safety is turned off there, it's that I can do more than I am usually allowed to. Therefore I don't think this analogy works really well in this example. Please don't take away my lovely ?s; they are such a step forward.
And to get a bit more on topic, I strongly resonate with @regexident: introducing too much syntactic sugar and hiding essential basics of the language can backfire. I have had roughly the same experience with Swift at my workplace. It just so happened that it spread among my coworkers who work on iOS at roughly the same time I got interested in Rust. Despite the fact that they use Swift a significant amount of their time at work while I have just a few hours for Rust in my spare time, I felt like I had managed to understand Swift better, with the knowledge of Rust, than my coworkers did in the early days of our Swift endeavor. The fact that Swift's ?s are nothing more than enums, and that when I showed them how they work by just creating custom Optionals they responded "WOW, cool – thinking – nice to know!", was a little bit concerning.
Swift has a ton of sugar and at least two ways to express the same thing (and maybe doesn't let the user know it's the same thing), which makes me really appreciate what Rust gives me today. I am really in favor of -> Result<String, String> and putting catch just anywhere (in front of the function like #async, or behind the return type, or any version others have suggested). I have to say -> i32 catch Error does look nice (I often work with Java), but for the things I love about Rust I hesitate to want this in the language.
The problems with Swift seem to have more to do with the fact that once you have an Optional you can’t just destructure it the same way you can with a normal ADT. This is not true with this proposal - a catching function returns a Result, which is still just a library type like any other.
That's not strictly true, actually. You can drop this into a Swift playground and see that it works just fine, and identically when spelling out the Optional type explicitly:
let foo: String? = nil;
switch foo {
case let .some(str): // or case .some(let str):
print("HEY, \(str)")
case .none:
print("YO")
}
And this is precisely the point I've been concerned by. It's extremely easy to develop the wrong mental model of what's happening when certain kinds of sugar are introduced. That doesn't in and of itself make the proposal here bad, of course! But I think it shows how easy it is for these kinds of syntactical sugar to have some unexpected downsides in terms of what they communicate even among really sharp and well-informed people.
So I’ve never used Swift and my entire impression is from reading tutorials on Optional.
What seemed like the problem to me was that if let doesn’t do a pattern-based destructuring, but just lets you do if let x = foo instead of if let Some(x) = foo, and that this is the idiomatic way to process the value. We would not propose that you could destructure a Result except by pattern matching. This is the big difference from this proposal, which makes comparisons to it seem to me to be of limited relevance.
That doesn't seem to give you that much. Instead of reusing the mental model people have built for ?, inside try functions they now get a pretty much inverse behaviour.
As @chriskrycho already remarked, this is wrong, as described in my first response:
No. Swift's if let foo = <expr> { … } is no more than syntactic sugar for if case let .some(foo) = <expr> { … }.
The former has been pushed as the idiomatic way of dealing with Optional<T>, obfuscating their actual semantics. As a matter of fact up until Swift 2, iirc, one could actually do if let foo = <expr> { … } with any enum that had a .Some(T) and a .None case, which I considered a feature (it generalizes), not a bug. They "fixed" it regardless.
There is no Result<T, E> in Swift. (Which is bad too, imho. But that's another topic on its own.)
The fact that you got the impression that one could not pattern match on Optional actually completely supports the point I'm making.
You in fact just witnessed it yourself. By reading tutorials on Swift's Optionals you ended up with a wrong understanding of the language. That's my whole point here.
It's precisely why I consider this RFC troubling to say the least.
@withoutboats, please read my two responses a second time, if you can spare the time.
As of right now, 40 people seem to agree with them, so this is clearly not of "limited relevance".
It's an even more radical proposal than the one in the OP. I don't think we get much from hiding the Result type, and it makes extension to Option and other types through the Try trait even more difficult. Hiding ? does not help either.
I think this syntax can be an acceptable middle ground:
try fn foo() -> Result<T, E> {
let x: T = try_g()?;
if fail() { throw e; }
x
}
try fn bar() -> Option<T> {
let x: T = try_g()?;
if is_none() { throw; }
x
}
try can be replaced with catch or something else, same with throw. This variant does not hide the enums and makes integration with other types trivial through the Try trait. And it allows Ok-wrapping through the explicit notion of try fn, which, together with other modifiers like async fn (and maybe others in the future?), notifies the user that this function operates under slightly different rules compared to the usual fn.
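To make the Try-trait point concrete, here is a sketch of my own of roughly what the bar example above would correspond to in today's stable Rust, with the wrapping written out by hand. The undefined try_g()/is_none() helpers from the sketch are replaced with concrete stand-ins, so this is only a rough hand-desugaring, not anything the proposal specifies.

fn try_g() -> Option<i32> {
    Some(1)
}

// Hypothetical hand-desugaring of the `try fn bar() -> Option<T>` sketch above.
fn bar() -> Option<i32> {
    let x: i32 = try_g()?;  // `?` already works on Option in today's Rust
    if x == 0 {
        return None;        // what the bare `throw` would do in the Option case
    }
    Some(x)                 // the Some-wrapping that `try fn` would add for you
}

fn main() {
    println!("{:?}", bar());
}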
For what it's worth, I didn't intend to propose that aspect at all. It works equally well with try fn foo() -> Result<T, E>. My intent in removing ? was twofold:
To double down on the "context" mental model based on the analogy to C#, where await is only available in async methods. ? would map to Task.Wait in that model: a reasonable way to extract the value when not in the context. (This also removes the redundancy between Err(e)? and throw e.)
To address the explosion of noise in async functions that also use the "context" mental model (IINM, withoutboats would like the two syntaxes to match). This is something people are worried about, since they keep e.g. proposing that ? mean both "try" and "await."
But as nikomatsakis pointed out, ? does have a stronger effect on control flow than await!, and as dan_t pointed out, people already have a mental model for it in Rust. So I'm not hugely opposed to your sample code either; it's certainly the best of the other options IMO.
Sounds like a good proposal, but it forces you to use an extra keyword at the beginning of the method; with the addition of visibility, the method declaration gets pretty lengthy:
pub(crate) try fn foo() -> Result<T, E> {
let x: T = try_g()?;
if fail() { throw e; }
x
}
"?" is actually "try" in rust , is it possible to use it in method declaration like:
fn foo()? -> Result<T, E> {
let x: T = try_g()?;
if fail() { throw e; }
x
}
I'm not sure this says as much as you are saying it does. I think that @withoutboats basically got to the heart of it, actually, in this comment:
In particular, they were pointing out that the precise sugar that Swift settled on has two parts. The first is expressing Option in terms of ? and so forth. The other is letting you write if let foo = <expr>, which completely hides the Some variant constructor (in Rust terms) and looks like a kind of "identity operation" (when in fact it is doing destructuring). In other words, the problem may not be "syntactic sugar writ large" but rather the specific choices that Swift made.
That is, if I understood, @withoutboats was trying to point out some specific differences between how Swift handles things and how this proposal works that they feel might make the difference.
Put another way, in Rust, when you get the Result, you still need to use the ? operator to "discharge and propagate" the error. So now as you learn more about how errors in Rust work, you can map ? to this operation -- whereas if there were no syntactic mark at all, it may be surprising that in fact a match is taking place.
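For readers following along, here is a rough sketch of my own of the match hiding behind ?; it is simplified and skips the Try-trait machinery, but it captures the control flow.

use std::num::ParseIntError;

fn fallible() -> Result<u16, ParseIntError> {
    "80".parse()
}

// What `let x = fallible()?;` expands to, approximately: a match that either
// extracts the value or early-returns the (converted) error.
fn caller() -> Result<u16, ParseIntError> {
    let x = match fallible() {
        Ok(v) => v,
        Err(e) => return Err(e.into()),
    };
    Ok(x)
}

fn main() {
    println!("{:?}", caller());
}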