So just to be clear, this is purely hypothetical. Actually doing this in a backwards compatible way would be difficult. So this question is ignoring all of the challenges that would be involved here. My question is in a perfect world where we have acceptable solutions to all of the problems around this, would people even allow it?
The use case here is for building interesting DSLs. Here’s a random example of some code using Diesel:
let downloads = version_downloads
    .filter(date.gt(now - 90.days())
        .and(version_id.eq(any(versions))
            .or(something_else)));
It seems from the outside like the reason PartialEq and PartialOrd don’t allow overloading the return type is so that ne can be defined in terms of eq, and gt, lt, ge and le in terms of partial_cmp, not because there was fundamental opposition to them returning types other than bool.
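To make that concrete, here is a minimal, purely hypothetical sketch of what a comparison trait with an overloadable return type could look like, mirroring how std::ops::Add declares an associated Output type. The trait EqOverloaded, its eq_op method, and the SqlEq/Column types are all invented for illustration; this is not std's API and not a proposal.

// Hypothetical sketch, not std::cmp::PartialEq.
// Invented names: EqOverloaded, eq_op, SqlEq, Column.
trait EqOverloaded<Rhs = Self> {
    type Output;
    fn eq_op(&self, other: &Rhs) -> Self::Output;
    // The std default `fn ne(&self, other: &Rhs) -> bool { !self.eq(other) }`
    // can't exist here, because `Output` is not necessarily `bool`.
}

// A DSL expression node that `==` could build instead of evaluating to a bool.
struct SqlEq<L, R> {
    left: L,
    right: R,
}

// Stand-in for a Diesel-style column.
struct Column(&'static str);

impl EqOverloaded<i32> for Column {
    type Output = SqlEq<&'static str, i32>;

    fn eq_op(&self, other: &i32) -> Self::Output {
        // `some_column == 42` would desugar to this call and produce an AST
        // node rather than a boolean.
        SqlEq { left: self.0, right: *other }
    }
}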
Again, I don’t want to focus on how we would go about implementing overloaded return types; I’m just curious how people would feel about it if it were allowed at all.
Of course, an alternative is implementing this with a macro that rewrites the operators to a custom method, which has the advantage of making it clearer to the reader that something unusual is going on and that the operators don’t have their normal meaning.
Can’t be done with macro_rules!, but Diesel in particular is already using a ton of procedural macros…
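To give a rough idea of the macro route, here is a hedged sketch of a procedural macro that rewrites == inside its input into an ordinary method call. The macro name el! and the method name sql_eq are invented for illustration, it assumes a proc-macro crate depending on syn (with the "full" and "fold" features) and quote, and it is not Diesel's actual implementation.

// Sketch only: must live in a crate with crate-type = "proc-macro".
// Invented names: el!, sql_eq. Assumes syn = { features = ["full", "fold"] } and quote.
use proc_macro::TokenStream;
use quote::quote;
use syn::fold::Fold;
use syn::{parse_macro_input, BinOp, Expr, ExprBinary};

struct RewriteEq;

impl Fold for RewriteEq {
    fn fold_expr(&mut self, expr: Expr) -> Expr {
        match expr {
            // Turn `a == b` into `(a).sql_eq(b)`, recursing into both sides.
            Expr::Binary(ExprBinary { left, op: BinOp::Eq(_), right, .. }) => {
                let left = self.fold_expr(*left);
                let right = self.fold_expr(*right);
                syn::parse_quote!((#left).sql_eq(#right))
            }
            // Everything else keeps its normal meaning.
            other => syn::fold::fold_expr(self, other),
        }
    }
}

#[proc_macro]
pub fn el(input: TokenStream) -> TokenStream {
    let expr = parse_macro_input!(input as Expr);
    let rewritten = RewriteEq.fold_expr(expr);
    quote!(#rewritten).into()
}

With something like this, el!(version_id == any(versions)) would expand to (version_id).sql_eq(any(versions)), so the unusual meaning of == stays visible at the call site.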
In my opinion in a perfect world this kind of thing would be written with plugins.
I think that it's a very good thing that any DSL or weird syntax is constrained within macros or plugins and that anything outside of them is regular Rust syntax.
Afraid. While it's cool to have stuff like this for an SQLAlchemy-style port, it would make guessing what a piece of code does harder than it already is. I don't know of a good reason, outside of a DSL, for == to return something like an Eq/NotEq node or equivalent.
Sorry, forgot to mention you'd need procedural macros.
Unfortunately, plugins and macros make it difficult for tools (and people?) to understand Rust code. For example, rustfmt cannot format code inside a macro. And a hypothetical refactoring tool will have a hard time renaming references in macros: while it should be possible, if hard, to find a reference in the expanded code, tracing it back to some token in the unexpanded macro is even more difficult.
Also, I wouldn’t like to debug a syntax error inside a huge macro invocation.
Weird. Can't it treat macro code as text, like a fancy string?
The rest seems to me like an OK tradeoff. It's much worse to have code like e1 == e2 and not know whether it has side effects than to have the tooling occasionally confused by el!(e1 == e2). Code is (so far) written to be human readable/understandable and only occasionally machine readable.
If you treat something as a string, or even as token trees, you can't reformat it in a meaningful way. And by the way, I was wrong that rustfmt can't format any macros: apparently it has a heuristic which allows it to reformat function-like macro invocations.
I'd take another position here: it's my fundamental belief that macros and plugins in programming point towards deficiencies in the core language. If people implement macros to redefine the meaning of basic operators, I would argue that it makes sense to make them overloadable.
I do have to say that this is a Rubyist's view on things, where a lot is possible without macros and plugins, and I generally like that a lot: if I understand Ruby and its evaluation model, I can (with a certain amount of time) understand any evaluation, without having to fear that someone has redefined the syntax somewhere.
I tend to agree; however, the more powerful the core language, the harder it is to comprehend. So it's kind of a tradeoff, and it's hard to tell where the line should be drawn.
For example, what if you allow arbitrary stuff like x ~> y to compile? Sure, it's useful, but what happens when someone abuses it for x ~~$@#> z?
In this instance (operator overloading) I agree with the Rust authors. I do think the tradeoffs in some other places are possibly wrong (e.g. no default types and no keyword args).
Well, the same happens if "Core Language + 20 Plugins" becomes the default. Compare with Haskell, where standard Haskell is rarely used and every library enables some pragmas.
If you allow rather arbitrary syntax extensions and macro trickery, any attempt to control things like that has already been given up, so I don't fully understand the question.
That would be bad, as it kills modularity when different modules have different ideas about the precedence/associativity they want (I've had that experience in Prolog).
But with a fixed precedence you can reasonably have arbitrary combinations of special characters as operators; for an example, take a look at Scala.
Actually, no, I wouldn’t. I consider precedence to be syntax (which should not be controllable), but operator behavior to be semantics (which I find easier to control).
I don't think any control is given up when such code is fenced in special delimiters like macros are. You can't write let z = x == y in regular code and have it return a string. let z = el!(x==y) tells you something funky is going on.
Or imagine getting the following error:
if x==y { true }
^~~~ WhereStmt isn't bool
Ugh. This is the part I hate most about Scala/Haskell. It sounds great on paper, but it's not actually usable unless you like the sort of code that looks like it was written by Elder Gods.
Fun question: these are real examples from a Scala library.
~%+#
~%#+#>
Without googling it, can you guess what these operators do? Can you guess with googling? What library do they come from? If yes, how long have you worked in Scala? Because I sure as hell couldn't google it, and Google even had my history of clicking it before.
Fencing code with macro! gives you a hint where the magic comes from, at the very least.