Method-cascading and pipe-forward operators proposal

In this topic I want to draw attention to these operators, explain their purpose, and share an initial idea for their syntax.

Before reading, note: I’m just an enthusiast - don’t take this too seriously and don’t be too critical if something is wrong; please excuse my bad English.

Jump to the second version - the text below is outdated


The proposal consists of two parts because both are related, and the second can be combined with the first to cover an additional use case.
Also, both differ in syntax from the forms found in other languages.


1. Method cascading

Let’s begin with examples of current Rust code and highlight some parts that could be improved:

    let mut hmap = HashMap::new();
    hmap.insert("key1", val1);
    hmap.insert("key2", val2);
    consume(&hmap);
    ...
  • Some values might remain mutable for no purpose after initialization (e.g. hmap might be mutated after consume).
  • A macro or builder must be provided to reduce initialization boilerplate.
    let mut result = OpenOptions::new()
        .write(false)
        .read(true)
        .open("location")?
        .to_some_collection();
    result.sort();
    return result;
  • It’s impossible to distinguish a regular method call chain from a fluent interface (there are no hints that we call methods on the same type, e.g. at open, where the call chain begins).
  • Method chains and fluent interfaces are incompatible with functions that return values other than self (like insert and sort).
  • Nothing says that the same struct is returned from fn like_this(&self) -> Self (we only know that a struct of the same type is returned).
  • Mutations can be hidden in method chains (collection methods like sort, push and extend solve that by returning () instead of self).

Proposal

Allow prepending ~ when calling a method that takes self by reference.
That will discard the method’s return value and return self instead.

    let hmap = HashMap::new()   // `hmap` isn't declared as `mut` here.
        .~insert("key1", val1)  // `~` returns `hmap`.
        .~insert("key2", val2); // `.~` chain shows that we operate on `hmap`.
    consume(&hmap);             // `hmap` stays immutable from here on.
    ...
    return OpenOptions::new()
        .~write(true)     // Setters don't need to return `self`.
        .~read(true)      // `~` shows that mutation might occur here.
        .open("path/to")? // It's clear where method call chain begins.
        .to_some_collection()
        .~sort();         // No need to introduce additional binding.

Using ~ on a method that takes self by value will copy that value, or produce a compile-time error if Copy isn’t implemented for its type.

Methods with #[must_use] should have ? applied to check the returned value before discarding it:

    file.~read_to_string(&mut buffer)?; // Produces warning without `?`.
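For comparison, the effect of `.~method()` can be approximated in today's Rust with a small extension trait; the `Tap` trait and the `tap` name below are my own placeholders, not part of the proposal or of the standard library:

```rust
use std::collections::HashMap;

// Hypothetical helper: run a closure on `&mut self`, discard the
// closure's result, and hand back `self` -- roughly what `.~` would do.
trait Tap: Sized {
    fn tap(self, f: impl FnOnce(&mut Self)) -> Self {
        let mut this = self;
        f(&mut this);
        this
    }
}

impl<T> Tap for T {}

fn main() {
    // As in the proposal, `hmap` never needs a `mut` binding.
    let hmap = HashMap::new()
        .tap(|m| { m.insert("key1", 1); })
        .tap(|m| { m.insert("key2", 2); });
    assert_eq!(hmap.len(), 2);
}
```

Unlike the proposed `~`, this still routes everything through a closure, so it cannot warn about discarded `#[must_use]` values the way the proposal intends.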

Upsides

  • Less boilerplate
  • Intention of code is cleaner
  • ~ fits that position well

Downsides

  • Additional language item.
  • Might be confused with ~ operator removed from language.
  • Returning self is a common practice - should it now be considered a bad practice?

2. Pipe-forward operator

Again, I’ll begin with examples and things that might be improved:

    let result = second(first());
    third(&result);
    return result;
  • Taken from #2049 - nested function calls must be written in reverse order.
  • Side effects (e.g. the third function) require a temporary variable (Kotlin solves that with apply and also with extension functions).
    let mut dir_path_buf = PathBuf::from("base");
    fs::create_dir_all(&dir_path_buf)?;
    dir_path_buf.push("filename");
    return File::open(dir_path_buf);
  • Too many things must be placed at the same level of indentation, which distracts from the essential parts.
  • Code might become bloated with repeated words (e.g. dir_path_buf on all 4 lines).
  • Temporary bindings must be named. But naming is hard, and sometimes it’s just an annoying routine.
    let content = {
        let mut buffer = String::new();
        file.read_to_string(&mut buffer)?;
        buffer
    };
  • One binding, a scope, a level of indentation and two extra lines of code are required to isolate the temporary binding.
  • Even though such code is very clean and descriptive, it could be shorter (even a one-liner in some languages).
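The block pattern above runs today as written; here is a runnable sketch that uses an in-memory `Cursor` in place of an open file, so no disk I/O is assumed:

```rust
use std::io::{Cursor, Read};

fn main() {
    // `Cursor` stands in for the file so the example needs no filesystem.
    let mut file = Cursor::new(b"hello".to_vec());
    let content = {
        let mut buffer = String::new();
        file.read_to_string(&mut buffer).unwrap();
        buffer
    };
    assert_eq!(content, "hello");
}
```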

Proposal

Like method cascading, this is based on an extension of `.`.
The expression form is value.(function), which is equivalent to function(value).
It can be combined with &, &mut, ~ and ?; the following table demonstrates all the possibilities:

    Expr                        | Fn arg       | Expr result | Expr in current Rust
    ----------------------------|--------------|-------------|------------------------------
    val.(by_val)                | moved        | fn result   | by_val(val)
    val.(&by_ref)               | borrowed     | fn result   | by_ref(&val)
    val.(&mut by_mut_ref)       | mut borrowed | fn result   | by_mut_ref(&mut val)
    val.(~by_val)               | cloned       | val         | { by_val(val); val }
    val.(~&by_ref)              | borrowed     | val         | { by_ref(&val); val }
    val.(~&mut by_mut_ref)      | mut borrowed | val         | { by_mut_ref(&mut val); val }
    val.(try_by_val?)           | moved        | fn result   | { try_by_val(val)? }
    val.(&try_by_ref?)          | borrowed     | fn result   | { try_by_ref(&val)? }
    val.(&mut try_by_mut_ref?)  | mut borrowed | fn result   | { try_by_mut_ref(&mut val)? }
    val.(~try_by_val?)          | cloned       | val         | { try_by_val(val)?; val }
    val.(~&try_by_ref?)         | borrowed     | val         | { try_by_ref(&val)?; val }
    val.(~&mut try_by_mut_ref?) | mut borrowed | val         | { try_by_mut_ref(&mut val)?; val }
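None of the proposed forms compile today, but each row's rightmost column does; a sketch exercising three of the desugarings on a `Vec<i32>` (the helper functions are illustrative, not from the proposal):

```rust
fn by_ref(v: &Vec<i32>) -> usize { v.len() }
fn by_mut_ref(v: &mut Vec<i32>) { v.push(4); }
fn try_by_ref(v: &Vec<i32>) -> Result<(), String> {
    if v.is_empty() { Err("empty".into()) } else { Ok(()) }
}

fn main() -> Result<(), String> {
    let val = vec![1, 2, 3];

    // val.(&by_ref)          =>  by_ref(&val)
    assert_eq!(by_ref(&val), 3);

    // val.(~&mut by_mut_ref) =>  { by_mut_ref(&mut val); val }
    let val = { let mut val = val; by_mut_ref(&mut val); val };
    assert_eq!(val, vec![1, 2, 3, 4]);

    // val.(~&try_by_ref?)    =>  { try_by_ref(&val)?; val }
    let val = { try_by_ref(&val)?; val };
    assert_eq!(val.len(), 4);
    Ok(())
}
```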

Now let’s see the rewritten examples:

    return first()  // No need for additional bindings.
        .(second)   // Functions written in natural order.
        .(~third);  // Clones the result of `second`, passes it to `third`, returns it.
    return PathBuf::from("base")  // No need for additional bindings.
        .(~&fs::create_dir_all?)  // PathBuf is borrowed by fs::create_dir_all.
        .~push("filename")        // PathBuf is mutated here.
        .(File::open);            // PathBuf is moved into `File::open`.
    let content = String::new()        // No need for additional bindings.
        .(~&mut file.read_to_string?); // Mutably borrowed by `file.read_to_string`.

I don’t expect this to work with inline closures, because the file.read_to_string usage in the previous example would conflict. Also, error checking with ? might become a mess.
And IMO it would be ugly, so it’s better to extract the closure into a variable and then use it:

    // BAD example
    value
        .(&mut move |x| function1(x, moved, 42))
        .(~|x| { 
            function2(x); 
            function3() 
        }?);
    // GOOD example
    let do_func1 = move |x| function1(x, moved, 42);
    let do_func2_3 = |x| { 
        function2(x); 
        function3() 
    };
    value
        .(&mut do_func1)
        .(~do_func2_3?)

Upsides

  • Code is shorter and faster to read/write
  • It’s hard to abuse it and make code unreadable (unlike the also/apply extensions in Kotlin)
  • APIs will have better ergonomics without extra programming effort
  • Rust will provide an alternative to |> (the pipe-forward operator) that works with the borrowing system

Downsides

  • Additional language item
  • Code is more implicit and “compressed”
  • It’s harder to debug and modify code without temporary bindings
  • There will be two ways to achieve the same result
  • The syntax is fairly complex

Feedback needed:

  • Does any of this have parser ambiguities? Any technical issues?
  • Can someone provide examples where it would be confusing rather than readable?
  • Any suggestions for alternative syntax? A symbol other than ‘~’?
  • Any suggestions for alternative naming (e.g. “apply method” and “apply external method”)?
  • Does it introduce any runtime/compile-time performance impact/improvement?
  • Should it be split into two separate features?
  • Can somebody help with writing an RFC?
2 Likes

I really like this, and when looking at alternatives to the ~ syntax I cannot think of any. This should definitely be two features so as to focus the discussion of each.

I, however, don’t think inline closures are bad.

As for performance and guarantees, from the little experience I have, I think that since the resulting object would be immutable, this is nice for those sorts of usability things. I often find myself having to make something mut for setup but not wanting it mut after setup, and this allows for that without having to introduce a temporary.

1 Like

In PHP and JS this is known as a pipeline operator |>

I can definitely feel the motivation behind this RFC. I hate temporary bindings with a passion, and have longed for solutions to these silly problems introduced by methods like <[T]>::sort and Read::read_to_string. Here's my thoughts:


I'm impressed by the effort to take differences in ownership into account, which is a very real problem in Rust that I feel is often glossed over in many attempts to bring ideas from functional programming into Rust. Unfortunately, I also feel it gives a pretty good impression of how difficult the design space is!

Your initial proposal for the pipe-forward operator already needs a little DSL to handle ownership and Results. Even just to support those features, I already find the syntax fairly intimidating, and I'm not sure that there is a good way to solve it.


For some "status quo" alternatives, something similar to this can already be implemented in user-code using macros. (with some tweaks to the syntax to make it easier to parse with macro_rules!) Also, a fairly trivial "pipe" method can give a lot of power for a comparatively small price:

pub trait Thru: Sized {
    fn thru<B>(self, f: impl FnOnce(Self) -> B) -> B;
}

impl<T> Thru for T { ... }

fn foo() -> io::Result<File> {
    PathBuf::from("base")
        // (yikes, I'll admit this Result-handling is pretty hideous)
        .thru(|path| { create_dir_all(&path)?; Ok(path) })?
        .thru(|mut path| { path.push("filename"); path })
        .thru(File::open)
}

Oddly enough, even though I applauded the RFC for taking ownership into account, thru seems to be capable enough of handling these problems despite just being the |> operator (though I will admit the closures can be pretty verbose).

(I do believe thru would have trouble however with functions that return borrowed data from self; however, I don't think such functions fall under the use case for this)
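For reference, here is a self-contained version of `thru` with the elided impl filled in (the body can only sensibly be `f(self)`) and a usage example that avoids filesystem calls:

```rust
pub trait Thru: Sized {
    fn thru<B>(self, f: impl FnOnce(Self) -> B) -> B;
}

impl<T> Thru for T {
    // The only sensible implementation: apply `f` to `self`.
    fn thru<B>(self, f: impl FnOnce(Self) -> B) -> B {
        f(self)
    }
}

fn main() {
    // Same shape as the PathBuf example, but without touching the disk.
    let s = String::from("base")
        .thru(|mut s| { s.push_str("/filename"); s })
        .thru(|s| s.to_uppercase());
    assert_eq!(s, "BASE/FILENAME");
}
```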


Applied ? will check returned from method value before discarding it.

This got a brief mention but I think it is worth providing an example. Also, I wonder what happens for other #[must_use] functions that don't return Results. (should .~method() simply warn on #[must_use] methods?)


let content = {
    let mut buffer = String::new();
    file.read_to_string(&mut buffer)?;
    buffer
};

aside: upon seeing this, my precise thoughts were: ohthankgoodness, I finally know I'm not the only person who writes code this way

5 Likes

Thank you all for replying and appreciating my work. That really motivates me to move further!

@Nokel81 I will start two separate RFCs where discussions will be more focused; this thread will be more about hacks around ~ and how they play together.

About inline closures (@ExpHP might be interested too): this is how Kotlin does pipe-forwarding.
I like it. However, currently it might not be the best way to do things in Rust:

  • In Kotlin you can wrap everything in try-catch, add a Throws annotation, or just ignore an error which you know never happens - Rust prohibits that: you must handle errors
  • Kotlin allows explicit this and doesn't require you to declare the closure argument (it's named it by default, which is great if you use it moderately) - Rust prevents you from unintentionally cracking someone's head with superpowers and requires you to write everything explicitly
  • Digging through the backtrace of a panic that occurred in deeply nested closures might not be fun
  • You can write code more complex than it should be (sometimes I catch myself doing that)

After listing all of that, I started thinking that the following code is more idiomatic than closures:

    return {
        let mut p = PathBuf::from("base");
        fs::create_dir_all(&p)?;
        p.push("filename");
        File::open(p)
    }

But closures might be the way to go after some ergonomic improvements.

And an interesting idea came to my mind that might even be proposed alongside the previous two!
It:

  • allows a closure to return its argument and ignore its expression result;
  • also uses ~ and has the same borrowing semantics;
  • works with the Try trait to infer when the argument should be returned wrapped in Ok( );
  • can improve iterators and futures a bit.

I've taken @ExpHP's idea of the Thru trait; here is an example:

    return PathBuf::from("base")
        .thru(|~p| create_dir_all(&p))?    // `Ok(p)` is returned here.
        .thru(|~mut p| p.push("filename")) // `p` is returned here.
        .thru(File::open)

Doesn't look that bad. But it still has a serious problem that I want to avoid:

You can write code more complex than it should be

It's like another dimension is added to the text, and that makes it harder to reason about where to move. Nothing will prevent you from writing more and expressing less (e.g. by nesting thru calls instead of flattening them, putting additional logic in closures where it shouldn't be, adding complex error handling, etc.). I think this problem should be prevented, and preventing problems with effective design that doesn't introduce them is where Rust really shines.
That motivates me to defer this solution and return to improving the initial proposal:

    return first()
        .(second) // Methods that take by value are unchanged.
        .~(third) // `~` is moved outside.
    return PathBuf::from("base")
        .~(fs::create_dir_all(&).unwrap()) // Allows to split flow!
        .~push("filename")                 // `.~` chain is now consistent.
        .(File::open)
    let content = String::new()
        .~(file.read_to_string(&mut)?);

Slightly better; however, the part that allows splitting the flow is very controversial, even for me.


I need to mention that the pipeline.rs crate exists.
It uses macros and might be an interesting alternative, but I haven't tried it yet.


Regarding performance, what interests me most is whether such a DSL would be more lightweight than closures/macros.
Is there any cost to builders, especially when all the Self-type-returning checking logic runs?
Some time ago I tried to measure that without success (probably because the code was optimized away), and more advanced knowledge was needed to do that properly.


@ExpHP, the following example shows what happens with method-cascading, #[must_use] and ?:

    file.~read_to_string(&mut buffer)?; // Produces warning without `?`

If the type is anything other than Result, I think it shouldn't be discarded, and there will be no other way to silence the warning than to use the return value.

The ideas are interesting, but I find the syntax hard to read.

 return PathBuf::from("base")
        .(~&fs::create_dir_all?) 
        .~push("filename")       
        .(File::open)

The value.(function) syntax blends in with function(value); I can’t visually distinguish between the two very well. I would prefer something more straightforward and less verbose, e.g. |>.

And I don’t like the ~ sigil too much. While there is definitely a real need in some way to apply imperative mutations inside call chains, I’m not sure this arbitrary sigil is that way. I really like Kotlin’s way of doing it, maybe that can be somehow applied (pun intended) in Rust?

1 Like

I strongly oppose adding another kind of function call to the core language. I appreciate the effort you put in this proposal, but I don't feel it would make things any clearer or any better. Here's why:

  1. The builder "pattern" (I'm not even sure it's a big enough thing to be called a pattern, it's basically just method chaining allowed by returning self) is perfectly sufficient (and language-agnostically familiar to most programmers) to implement the creation of config objects and the like. However, in Rust we can do better even without having to implement the builder pattern: we have the Default trait and "rest" syntax in structs. So, a better implementation of a config object with few customized and many default fields is, for example:

    OpenOptions { write: false, read: true, ..Default::default() }
    

(Too bad that the actual std::fs::OpenOptions type wasn't designed with this in mind, it would have been much more ergonomic.)
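Since the real std::fs::OpenOptions keeps its fields private, the struct-update pattern only works on types designed for it; a sketch with a hypothetical Options struct (my own, not a std type):

```rust
// Hypothetical config struct; `Default` derives every field to `false`.
#[derive(Debug, Default, PartialEq)]
struct Options {
    read: bool,
    write: bool,
    append: bool,
    create: bool,
}

fn main() {
    // Spell out only the customized fields; `..Default::default()`
    // fills in the rest.
    let opts = Options { read: true, ..Default::default() };
    assert!(opts.read);
    assert!(!opts.write && !opts.append && !opts.create);
}
```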

It’s impossible to distinguish between regular method call chain and fluent interface

I strongly feel like "fluent interface" is a grandiloquent buzzword which is being attributed more merit than it actually has. Granted, it's a nice pattern if you are that kind of programmer. I don't want it baked right into the core language, however. (Personally, I don't even like it. I prefer that my code does what it looks like it's doing. I don't want to read code like English prose.)

no any hints that we call methods on the same type

Well, there's the type system, there's the documentation of the method, there's your IDE… I don't think even more special syntax would significantly help with that.

Method chains and fluent interfaces are incompatible with functions that return values other than self (like insert and sort).

And for a good reason. It seems to me that you want a functional-style data processing pipeline of some sort. And that absolutely makes sense. However, it's only in good style if it actually works like a pipeline: each method returns a transformed, immutable collection, on which the next similar method is called.

If the methods actually mutate the collection, then I absolutely don't want to trick myself into the illusion of using a pipeline of immutable data. If there's mutation going on, I really-really want to see my collection.insert()s and collection.sort()s written out in a way that sticks out like a sore thumb, screaming "HERE BE DRAGONS" to the reader. A ~ sigil doesn't really help with that.

You also assert that

Intention of code is cleaner

which I disagree with. If I see a function call with some weird symbol before it, I'll have no idea what it does. It looks roughly like a function call. Is it a function call? If it's not, what is it? If it is, why doesn't it look like any other function call? If it's doing something magical apart from calling a function, why isn't that magic very explicit, very clear to the reader? Sorry, but I don't buy the clarity argument at all. For me, it would actually make the code worse and harder to read.

--

Method piping would be nice in principle. In practice, though, I find that just making temporary let bindings works perfectly well. I don't understand why that's a problem. Actually, I find breaking down a very complex expression into distinct subexpressions easier to understand. If there's a well-defined boundary of a sub-task, the steps of the operation the code is doing becomes clearer.

The fact that you had to write out 12 lines in that table should also be a red flag. There's apparently a combinatorial explosion of syntactic forms and/or corresponding semantics depending on both argument and return value types (value/ref/ref mut). I, for one, don't want to memorize 12 different forms of function calls just in order to be able to recognize a glorified { foo.push(42); foo.sort(); foo }. Again, this sort of syntactic blow-up doesn't make anything any clearer. Conversely, it adds so much noise and so much complexity. I would definitely hate to read such clever code, and I would make many mistakes when attempting to produce it.

In the Haskell world, many programmers use fancy-looking operators for several things. Even experienced Haskell programmers complain about that regularly and argue that sigil proliferation is a major readability problem in Haskell. I specifically would not welcome such a change in Rust. I never thought of block expressions and variable bindings as a hassle — when they are necessary, we should just use them and get over it.

7 Likes

If a programmer wants to write shitty code that no one will be able to understand, then they'll find a way to do it no matter how many features are added to the language to prevent them from doing it. Ending the method chain when you're done with the initialization of an object is a logical step when an interface was designed with that in mind. Of course, there are interfaces that aren't designed like that for one reason or another and your proposal has merit.

Delphi had a nice with keyword that was used to do something similar to what you want. The code looked something like this:

with Object do begin
   doOneThing();
   doAnotherThing();
   doThirdThing();
end;

In my opinion this syntax is easier to comprehend than the one you proposed.

This sort of syntax is extremely problematic and universally recognized as a "Bad Thing(TM)". Why? Because of things like this:

let x = cornflucky();
let y = bambuckled();

with y {
     let z = schlameel();
     let w = shlamazael();
     let v = x.wamalam();
}

Question: Is schlameel() a method on y or a function in scope? What about shlamazael()? Let's say the former is currently a function in scope and shlamazael() is a method on y. Now, let's say that whatever y is that is returned by bambuckled() has an x field. Uh-oh. Is that what we thought would happen? What if it didn't originally have an x field, but an x field was later added to whatever y is? So now a change to the definition of y completely (and potentially silently) changes what this code does. Yikes!

Every language that has this sort of "with ...." syntax regrets it.

2 Likes

It could be limited to only calling methods on an object and discarding the returned value as the OP proposed.

Being a Haskell programmer myself, while there are criticisms against the overuse of custom operators (particularly due to the lens package), I have never heard a fellow Haskell programmer say they wish the language didn't have custom operators.

Furthermore, there's a certain logic to the choice of sigils when it comes to custom operators. Given $ (low fixity function application), then <$> is function application lifted into a box (functor mapping), <*> is multiplication (function application) of a function in a box to a term in a box ((<*>) :: Applicative f => f (a -> b) -> (f a -> f b)).

And so it continues. The choice of sigils for a custom operator in Haskell is intuitive, and most Haskell programmers know all the custom operators in the base package. Even in the lens package, the sigils chosen for custom operators follow a pattern.

Fundamentally, custom operators give users the ability to encode better EDSLs (Haskell is commonly known as one of the best languages to encode EDSLs in) and use symbols common for their problem.

Variable bindings force the user to come up with variable names. I think that is a big problem. As a fan of point free notation, I don't particularly like being forced to come up with names for temporaries.

3 Likes

Indeed, a certain one. I've been programming in Haskell for… well, approximately 4 years now. I have memorized most of the operators you gave as examples, and I still wish they had descriptive function names instead. So now you have heard a fellow Haskell programmer complaining about this issue. (Case in point: the choice of sigils is not exactly "obvious"; what's obviously function-application-y in $, for example? Of course, if you have been using it forever, you feel that <$> has something to do with function application, but I'd rather not base my safety-critical code upon gut feelings.)

In Rust, you can use macros to encode even better DSLs very locally, in a very limited scope, without needing any support in the core language. This is a strictly superior solution compared to allowing arbitrary operators, or adding more and more hard-coded ones.
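As a sketch of that macro route, here is a tiny pipe! macro (my own naming, not an existing crate) that threads a value through a list of functions in reading order:

```rust
// Recursively rewrite `pipe!(v => f => g)` into `g(f(v))`.
macro_rules! pipe {
    ($value:expr) => { $value };
    ($value:expr => $f:expr $(=> $rest:expr)*) => {
        pipe!(($f)($value) $(=> $rest)*)
    };
}

fn double(x: i32) -> i32 { x * 2 }
fn inc(x: i32) -> i32 { x + 1 }

fn main() {
    // Reads left to right, like the proposed `.(f)` chains.
    let out = pipe!(3 => double => inc);
    assert_eq!(out, 7);
}
```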

That's kind of the point. If an expression is long enough to need breaking up into subexpressions, then you should probably not write it in point-free style in the first place. Variable names are indicators of the values they contain. Yes, it might induce an unusual feeling of reward to be able to cancel out argument names on both sides of a function, and for smaller, simpler expressions, it does make sense. It's one of the best features in Haskell. However, when one is trying to read a long, complicated expression, it's not exactly easier to comprehend it in point-free style.

The basis of this argument is basically the same as the requirement of typing fn items explicitly. If implemented correctly, Rust's type system could infer those types too, just like Haskell. It's just natural deduction all the way down, anyway. However, some explicitness comes with a level of clarity, and it's sometimes better than trying to be overly clever and eliminate 100% of the temporary variables and function arguments.

So while I do think you have a point, I came to the conclusion that

  1. Rust's safety- and correctness-oriented design fits "sometimes explicit and descriptive" better than "always implicit, terse, and magical"; and
  2. I (and many others) like it this way, so I'll have to disagree with you.
1 Like

Well complain yes; wanting to remove them from the language? That's different.

Nothing. But then again, there's no particular reason why addition is +. Operator sigils are chosen by convention and tradition. What makes one feel more natural than another is merely a matter of how early on you were trained to use it and how long it has been around in history. To me, this feels like an extension of Stroustrup's rule.

Functor mapping and function composition are immensely common operations, especially in Haskell -- I don't think the speculation that these operations are more common than subtraction or even addition is far-fetched.

Safety-critical code should, IMO, be based on rich and strong type systems, including formal verification and dependent types, not on some choice of name. Hoogle also makes it pretty trivial to find out what a custom operator does.

Over-locality can be a problem for EDSLs if all you do is live within that EDSL. Macros also mesh less well with the rest of the language compared to custom operators which are just functions. I also don't think macros are used that much for EDSLs in Rust. It seems to me they are more commonly used to reduce duplication.

The way I write Haskell, where and let blocks are essentially banned unless I want to force memoization. If a function becomes long, I will simply create a new top-level function. This improves the ability to test things. To me, the abstraction boundary should be the module and not the function.

Ah, but this is the point of point-free style. It encourages very small functions and breaking things up. Of course, you should not overdo it. As we Swedes say, "Lagom är bäst". If you find yourself using flip a lot, you are probably overusing point-free style.

In my experience, the point of global type inference (which I personally think would be a good addition to Rust if coupled with a REPL) is that it allows you to simply write the function and then let the compiler tell you the most general type with ghci> :t myfun. When I make use of global type inference, it is precisely so I can first define a function, and then immediately after put the type signature on it. As such, there are at most never more than a few functions that lack type signatures.

I don't agree that temporary variable bindings are more explicit. In my experience, the distaste and lack of imagination many people have (including me) leads to poorly named temporary bindings.

I too am a sucker for safety and correctness and I'd be the first to argue for full dependent types in Haskell or Rust. However, I simply don't agree that temporaries add to correctness (and as I explained above to explicitness).

Now, that is splitting hairs. Obviously what I meant was that Rust shouldn't have custom operators in my opinion. I wish Haskell didn't have either, but in all honesty, would it be fruitful to start a fight trying to remove them retroactively? I don't think that's a realistic goal, so I wouldn't bother now. However, Rust doesn't have them yet, so we can still reasonably argue for keeping them out of the language.

Sure, but unfortunately that doesn't help immediate, at-a-glance readability. If I have to go to the internet or GHCi every time I encounter an operator just to find out what it does, then readability is already a lost cause. Being able to skim through the code fast, looking for a name, for example, is a very important ergonomic property of a language, although most programmers probably don't recognize this fact consciously.

But then you are precisely arguing for the explicit typing at function boundaries, aren't you? Also, how is a top-level function better than a top-level local variable in a function in this regard? I honestly don't see why it would be harder to come up with a good name for a local variable than it would be to invent a good function name.

On the rest, I'm afraid I just disagree with you.

Certainly you can.

That's why I think you should use custom operators in moderation and sparingly when it is a really central operation. Named functions can also be poorly named btw, so often you have to go check the definition of a function anyways in the documentation. At least that is my experience.

In any case, this thread is not aiming to add custom operators, but a limited set of new operators (I won't speak to the appropriateness of those particular operators...).

I am; but I do use global type inference as a development and prototyping tool. I think it is invaluable as such. But I would never permit a function without a type signature to be used in production.

The former is testable (you can define QuickCheck properties for them), the latter is not. Unfortunately, I've seen a lot of 200 LOC methods in Rust and I think those hurt readability and correctness.

When I write down a function with its type signature, I also write documentation. This becomes much more of a unit to me than a let binding. I think especially in Haskell, there's a really bad tendency to use one-letter or very short variable names like go. A temporary binding also means that the variable name must be repeated twice, once for the binding and once for the use -- this adds noise.

Not really, there are modern languages that have this feature and use it for Great Good™. E.g. Kotlin provides apply, run and with methods on all objects, which allow you to execute an arbitrary block with the given value as its receiver, e.g. this is valid Kotlin:

y.apply {
     val z = schlameel();
     val w = shlamazael();
     val v = x.wamalam();
}

While shooting yourself in the foot like above is possible and sometimes it does happen, I, as a Kotlin user, feel that the overall usefulness of this feature far outweighs the problems caused by occasional impact to readability. Without exaggeration, this feature is a serious productivity and elegance boost for me in Kotlin. I am, of course, using it very sparingly and in a disciplined fashion (I would never write code like above).

1 Like

“You can’t make me use the better option!!!”, as Mr Crockford said facetiously in another talk.

Case in point: yes, “with”, “apply”, etc. might be useful. To some extent. Certainly, it’s neither essential nor a major improvement, though. Furthermore, it is simultaneously dangerous.

Actually, it can allow one to create powerful DSLs, which is a major feature in Kotlin. You would almost be able to do stuff like this: https://kotlinlang.org/docs/reference/type-safe-builders.html (The difference being that you would still have to put ".apply" after each block opener.)

As to dangers, the usage can be restricted to a certain extent, e.g. in Kotlin such DSLs are defined using @DslMarker attributes that, in a nutshell, only allow to use class Foo's property bar only immediately inside foo {} blocks, not in nested blocks.

As I’ve already explained, EDSLs can be (and are often) implemented in Rust using macros instead, in a strictly more general way, without the need for custom operators.

Can you point me to an example of such EDSL?