Pre-RFC: flexible `try fn`

In that case, I have to admit that I find your comment a bit frustrating too, because it sounds as if it automatically dismissed the opposing opinion as immature or short-sighted.

While this may sound appealing, there are still a few questions where the right thing to do is to make a firm decision. I think a core value judgment in language design is perhaps one such exception. Drawing a line in the sand (after proper consideration and design, of course) is better than endless hesitation and debate about alternatives, continuous wandering in the design space while accumulating technical debt, or settling on compromises just for the sake of compromise, which end up bad for both parties. In this regard, a firm decision is both acceptable, because nobody is forced to use Rust (those who don't like its fundamental ideas can try it out and then leave), and necessary, because without a very explicit direction and goals, the language won't go anywhere.

I didn't question that, although I feel that there are many users, RFC authors, etc. who, even if aware of the problem, don't particularly care about it, because it is in tension with getting their personal favorite feature added to the language.

That assumes that accumulating features is either necessary or mandatory. It's neither. You yourself cited examples (e.g. Go) that prove the opposite.

But even monotonic growth can be handled well – the key distinction is between "monotonic" and "unbounded". Unbounded growth in a software system leads to an inevitable decline in quality. So a possible solution to this quality problem may simply be not adding many more significant features, and instead asymptotically approaching a supremum of complexity, possibly even restricting maintenance to fixing bugs (there are still plenty of those in the compiler, and escalating language complexity doesn't help with them).

And what about the users who started using and loving the language for what it already was? Especially given that there are several other languages out there with features that are not available in Rust. If someone likes programming with such features more than Rust's approach, they could just use those other languages instead of Rust. In other words, there's not much point in changing a language with the goal of making it more similar to another language. It's also a strange idea in itself to suddenly shift the target audience away from those who liked Rust for its original profile, toward those who would like it for its future similarities to other languages, or for a completely different paradigm they hope Rust will adopt, and who are consequently pushing it in that direction.

Incidentally, I think a new paradigm could even warrant the design and implementation of a new language — but why take away the existing language from its current users, who like it just the way it is (give or take a few rough edges)? We can't possibly cater to everyone anyway, and I don't think that would be a healthy goal.

On a related note, someone mentioned this a couple of days ago, but I also find it very strange and backwards that the burden of proof is on those who want to keep the language on its current track, and not on those who want to change it. We constantly have to defend the position that Rust treats specific constructs the way it does for good reasons, which have been discussed and evaluated thoroughly, which work well in practice, and which were deliberately chosen to be different from other languages because they are a better solution than what is found elsewhere. And if we get tired of this endless fight, we find ourselves in a situation where the language suddenly changes under our feet without good reason.

Apart from how frustrating that situation is, it also makes it impossible to build a solid foundation upon the language. If idioms and best practices change sharply and suddenly, then code written today becomes technical debt tomorrow. New programmers won't understand why something was done differently in the "old days", because the book, the community, and the lints will teach it exclusively the new way. Even with a technically upheld "stability guarantee", this would pretty much undermine all practical attempts at stability and long-term maintainability. Maintainability and stability of code don't stop at "it will keep compiling"; they also include "it will stay easy for future generations to understand and modify".

Radical changes to the language such as this one can't really be considered "filling out". I've said this before, but in my opinion, Rust would benefit a lot more from actual filling out, i.e. the implementation of accepted features, the refinement of existing ones, and above all, bug fixes. The act of changing the core idioms would effectively hijack the language.

Yes, that's exactly what I was trying to describe above. Proposals which try to change the core identity of the language also result in backwards-compatibility-induced bloat, although I would argue that's the smaller problem; shifting the identity is in itself the more serious one, because it basically makes the language's existence pointless.

That is an excellent question, although I don't see how it's relevant to this discussion. Anyway, given how well current Rust is designed, it is highly unlikely that its core identity will turn out to be just plain wrong within a time frame in which the language, or at least its stability, still makes an impact. C is an excellent example of this phenomenon: it was the best way to do systems programming back in the day, but over time, its uncontrolled, unchecked, weakly-typed approach to accessing hardware and resources turned out to be unnecessary and too dangerous. Forty-odd years after the birth of the language, we can now design better systems programming languages in the hope of replacing C, without needing either to grow C ad infinitum in a backwards-compatible way, or to change it and break people's old code. The C standards committee and community seem to have realized and embraced this; unfortunately, the C++ community hasn't, and now it has an identity problem.

That's quite hyperbolic. If you look outside the design-by-committee world, you will find plenty of counterexamples. Most functional languages' users love and admire the mathematical beauty and purity of their favorite language, usually not without reason. Some major scripting languages, such as Python and Ruby, also have user bases that agree the language is good for the purposes it was designed for. Sure, there are many other languages that have a reputation for being a "pile of junk" (and rightfully so), but it's not a necessity. Why, then, should we set our own standards so pessimistically? "It will turn out to be bad anyway, so we might as well not care at all (or let anyone actively mess it up)" is not constructive, because if we think about Rust like that, then what was the point in the first place?

If the Rust community manages to keep the language sane, reasonable, and faithful to its original fundamental values, then I think it can realistically have such high hopes. Otherwise, probably not.

With this claim, you are reducing everyone who is concerned about the growth in complexity of the language to the level of someone clueless who just automatically contradicts everything as a knee-jerk reflex. Please consider that most of the so-called "don't be like C++" arguments have solid and extensive technical reasoning behind them, and highlight important technical as well as social/cultural issues.

Again, this ignores the fact that people do regularly give detailed, professional arguments as to why they consider a specific feature or change to be a bad idea. So the claim that the entire opposition can be summed up as "pointing to C++" is an oversimplification at best, a strawman at worst, and in either case is simply untrue.


I also don’t believe that BDFL improves things.

I don't think BDFL is better. It makes clear where decisions are made and it provides a clear venue for saying no for esthetic / language coherence reasons, but doing so by elevating a single person's preferences above all others obviously has downsides. My point was that languages without a BDFL might have to provide other means for this, such as a trusted committee that takes this role.

I don’t find @withoutboats’ comment derailing,

I didn't say that at all? I thought it was a very insightful post! In fact I was just trying to build on that to try and provide a more focused diagnosis of the problem as it applies to this thread.

  1. The core problem appears to be process issues (language issues only come into the picture because there is disagreement about whether, when and how they should be considered).
  2. I think the main perceived process issue might be that there is a lack of visible language guardianship, and no clear venue to talk about this in a positive manner. If you want to keep a certain feature as-is, either out of desire for simplicity or to keep options open for a certain possible future direction, then you have to re-litigate this on every discussion about changes in this area.

FWIW I think the Rust model of deliberation is working pretty well overall, especially because people are recognizing where things are not going well and making changes. See the call for blogs, the eRFC process, etc. All of that seems incredibly healthy, and ahead of just about any other open source project I know. The issue I'm describing might just be an unavoidable consequence of organizational growing pains, and might not be fixable without throwing the baby out with the bathwater. But I still think it's worth thinking about.


This is one of the reasons I keep bringing up "implicit await" and "implicit ?"-like designs: they enable this sort of polymorphism without requiring a new "await-if-this-is-a-future" or "?-if-this-is-a-Result" operator. For instance, here, or the "Motivation > Future-proofing" section here.

Not sure if "hide the effect unwrapper when in the effect context" is the best way to achieve this sort of polymorphism, but it's definitely one of the less messy options I know of.

Just to clarify: Rust's evolution is ultimately governed by a set of formal teams covering various areas of development. The Language Design team ultimately makes decisions on language-related RFCs, and works closely together (and in consultation with the broader community) to ensure an overall coherent vision of where to take the language.


As promised, I've created a separate topic for the bloat-prevention meta-discussion. May I suggest we move further comments on that subject over to Fortifying the process against feature bloat?


I've wondered before whether there's a fundamental difference between "carried" effects like async/try and "erasable" ones like const/unsafe. I can totally imagine how polymorphism for the latter might work, but I have no idea how to abstract away differences like for_each vs try_for_each...
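To make the difference concrete, here is how the two adapters behave today (the sample data and the main function are just mine, for illustration; both adapters are existing std methods on Iterator):

fn main() {
    let v = vec!["1", "2", "x"];

    // for_each: the closure returns (), so nothing is carried out of the iteration.
    v.iter().for_each(|s| println!("{}", s));

    // try_for_each: the closure returns a Result, and that Result is carried all the
    // way out: the whole call returns Result<(), _>, and the first Err short-circuits
    // the remaining items.
    let r: Result<(), std::num::ParseIntError> =
        v.iter().try_for_each(|s| s.parse::<i32>().map(|_| ()));
    assert!(r.is_err()); // "x" fails to parse
}

The closure's effect leaks into both the adapter's name and its return type, and that leak is exactly the "carried" part I don't see how to abstract over.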

Still, I'm not saying whether even the general direction would be a good idea in any way.

But here is an idea of how it could perhaps be implemented with current or planned Rust features. I'd specifically rely on specialization here. I don't know what exact state it is in right now, or whether all the things here will be allowed, and I've never used the syntax, so it's probably wrong according to the current proposal, but what matters is the idea.

  1. Let's create a trait that defines how a type behaves with regard to a carried effect, or whether it carries anything at all.
trait Propagate {
  // Hmm, can I have generic associated type?
  type OuterType<T>;
  fn should_bail(&self) -> bool;
  fn to_bail<U>(self) -> Self::OuterType<U>;
  fn from_good<U>(u: U) -> Self::OuterType<U>;
}
  2. Define it for all "normal" types
default impl<T> Propagate for T {
  type OuterType<U> = U;
  fn should_bail(&self) -> bool { false }
  fn to_bail<U>(self) -> U { unreachable!() }
  fn from_good<U>(u: U) -> U { u }
}
  3. Implement for Result (for simplicity just for that one for now)
impl<T, E> Propagate for Result<T, E> {
  type OuterType<U> = Result<U, E>;
  fn should_bail(&self) -> bool  { self.is_err() }
  fn to_bail<U>(self) -> Result<U, E> { Err(self.unwrap_err()) }
  fn from_good<U>(u: U) -> Result<U, E> { Ok(u) }
}
  4. Use it (a hypothetical call site is sketched right after the snippet):
fn do_stuff<R: Propagate, F: FnOnce() -> R>(f: F) -> R::OuterType<usize> {
  let res = f();
  if res.should_bail() {
    return res.to_bail();
  }
  R::from_good(42)
}
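And, purely to illustrate the intended behaviour (the function names plain and fallible are made up, and none of this compiles today, since the sketch above doesn't either):

// Hypothetical call sites for do_stuff, assuming the sketch compiled:
fn plain() -> usize {
  // R = u8 hits the default impl: OuterType<usize> = usize, should_bail() is
  // always false, so we just get the 42.
  do_stuff(|| 5u8)
}

fn fallible() -> Result<usize, String> {
  // R = Result<u8, String>: OuterType<usize> = Result<usize, String>, and the
  // Err value short-circuits past the 42.
  do_stuff(|| Err::<u8, String>("boom".to_string()))
}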

I know, ugly, unergonomic, maybe something is not allowed or implemented right now… but my point here is: is there a chance one could implement such a thing with just the type system?

I have to say that I’m also not a terribly huge fan of the proposal. There is a proliferation of (proposed) keywords that all essentially do the same thing: return, ?, break, yield, await, fail, pass, expression in final position and, arguably, panic!. They all pass a value from one function, closure, co-routine or block to another.

They differ only by describing which “pathway” to take upon yielding control. Depending on the context there are several such pathways:

(try) fn         result or error
generator fn     yield value, yield error*, final value or final error*
async fn         not ready, value or error
stream fn        not ready, yield value, yield error*, final value or final error*

(* Incidentally, in generator-like closures, which one will `?` result in: a yield error or a final error?)
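For comparison, two of these rows already have concrete enum shapes in the standard library (std::task::Poll and the generator's GeneratorState); the mapping sketched in the comments is only my own reading of the table above, not anything official:

// Mirrors of std::task::Poll and std::ops::GeneratorState, side by side:
enum Poll<T> {
    Ready(T), // a value is available ("value or error", once a Result is nested inside)
    Pending,  // "not ready"
}

enum GeneratorState<Y, R> {
    Yielded(Y),  // "yield value" / "yield error", depending on where the Result sits
    Complete(R), // "final value" / "final error"
}

// Rough reading of the table:
// (try) fn      ~ Result<T, E>
// async fn      ~ Poll<Result<T, E>>
// generator fn  ~ GeneratorState<Y, Result<T, E>>
// stream fn     ~ Poll<Option<Result<T, E>>>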

Now, there is a recent proposal to add Result-like error handling to C++. Since this is used for return values only, the discriminant doesn't need to be explicitly stored, but can be passed along in a register or a CPU flag.

Going further, the function could even return to different places in the caller or co-routine, depending on which pathway is chosen (this would add some call overhead, but simplify result checking). Essentially, this would lead to generic state-machine handling that could be used for all of the above cases.

Even though I acknowledge that this is not feasible immediately, I would be happier if there was a visible path towards a unification like this, rather than just adding one keyword after another for what are really similar use-cases.


Note that that's just an ABI question. Thanks to changes like Use ScalarPair for tagged enums by nox · Pull Request #49420 · rust-lang/rust · GitHub, Rust can also do that! As a simple example,

pub fn foo() -> Option<usize> {
    Some(12345)
}

is just

playground::foo:
	mov	eax, 1
	mov	edx, 12345
	ret

passing the discriminant -- and payload -- along in a register, not writing it to memory.

(And it's a general optimization, not something Option-specific.)


This can be implemented rather straightforwardly as a macro. This example doesn't support edge cases like closures, recursive usages of return, and macros that return, but those edge cases could be implemented, I think (except, I guess, for macros that return).

macro_rules! __try {
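    // Rewrite `return expr` (with or without a trailing `;`) into `return Ok(expr)`.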
    ([@process_tokens] $c:tt [$($retrieved:tt)*] return $ex:expr ; $($rest:tt)*) => {
        __try!([@process_tokens] $c [$($retrieved)* return Ok($ex) ;] $($rest)*);
    };
    ([@process_tokens] $c:tt [$($retrieved:tt)*] return $ex:expr) => {
        __try!([@process_tokens] $c [$($retrieved)* return Ok($ex)]);
    };
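    // Rewrite `throw expr` into `return Err(expr)`.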
    ([@process_tokens] $c:tt [$($retrieved:tt)*] throw $ex:expr ; $($rest:tt)*) => {
        __try!([@process_tokens] $c [$($retrieved)* return Err($ex) ;] $($rest)*);
    };
    ([@process_tokens] $c:tt [$($retrieved:tt)*] throw $ex:expr) => {
        __try!([@process_tokens] $c [$($retrieved)* return Err($ex)]);
    };
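    // Descend into (...), {...} and [...] groups, stashing a continuation so that
    // nested `return`/`throw` get rewritten too.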
    ([@process_tokens] $c:tt $retrieved:tt ( $($in_parens:tt)* ) $($rest:tt)*) => {
        __try!([@process_tokens] [@() $c $retrieved $($rest)*] [] $($in_parens)*);
    };
    ([@process_tokens] $c:tt $retrieved:tt { $($in_parens:tt)* } $($rest:tt)*) => {
        __try!([@process_tokens] [@{} $c $retrieved $($rest)*] [] $($in_parens)*);
    };
    ([@process_tokens] $c:tt $retrieved:tt [ $($in_parens:tt)* ] $($rest:tt)*) => {
        __try!([@process_tokens] [@[] $c $retrieved $($rest)*] [] $($in_parens)*);
    };
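    // Any other single token is copied through unchanged.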
    ([@process_tokens] $c:tt [$($retrieved:tt)*] $token:tt $($rest:tt)*) => {
        __try!([@process_tokens] $c [$($retrieved)* $token] $($rest)*);
    };
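    // Input exhausted: hand the accumulated tokens to the stored continuation.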
    ([@process_tokens] $c:tt $retrieved:tt) => {
        __try!($c $retrieved);
    };
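    // A finished group: re-wrap it in its original delimiter and resume processing
    // the enclosing token stream.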
    ([@() $c:tt [$($retrieved:tt)*] $($rest:tt)*] [$($result:tt)*]) => {
        __try!([@process_tokens] $c [$($retrieved)* ($($result)*)] $($rest)*);
    };
    ([@{} $c:tt [$($retrieved:tt)*] $($rest:tt)*] [$($result:tt)*]) => {
        __try!([@process_tokens] $c [$($retrieved)* {$($result)*}] $($rest)*);
    };
    ([@[] $c:tt [$($retrieved:tt)*] $($rest:tt)*] [$($result:tt)*]) => {
        __try!([@process_tokens] $c [$($retrieved)* [$($result)*]] $($rest)*);
    };
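    // Accumulate the item's signature tokens until the body's `{ ... }` group is
    // found, then start rewriting the body.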
    ([@find_brace] $found:tt { $($inner:tt)* }) => {
        __try!([@process_tokens] [@finish $found] [] $($inner)*);
    };
    ([@find_brace] [$($found:tt)*] $t:tt $($rest:tt)*) => {
        __try!([@find_brace] [$($found)* $t] $($rest)*);
    };
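    // Finally, emit the signature with a body whose result is wrapped in `Ok(...)`,
    // so the happy path is auto-wrapped.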
    ([@finish [$($found:tt)*]] [$($code:tt)*]) => {
        $($found)* {
            Ok({
                $($code)*
            })
        }
    };
}
macro_rules! try_fn {
    ($($tokens:tt)*) => {
        __try!([@find_brace] [] $($tokens)*);
    };
}

fn early_return() -> bool {
    false
}

fn is_error() -> bool {
    false
}

#[derive(Debug)]
struct MyError;

fn check_another_error() -> Result<(), MyError> {
    Ok(())
}

try_fn! {
    fn foo() -> Result<u64, MyError> {
        // returns Ok(10), i.e. happy path is Ok autowrapped
        if early_return() { return 10; }
        // returns `Err(MyError)`
        if is_error() { throw MyError; }
        // `?` works as well
        check_another_error()?;
        1 
    }
}

fn main() {
    println!("{:?}", foo());
}
