As much as I want named arguments, this kind of idea will probably not make things move forward.
I think that any proposal about named arguments (or anything that improves the handling of arguments) should include:
- a clear description of the problem we are trying to solve
- a clear explanation of why the currently available techniques are not a good solution (creating multiple functions with suffixes, using traits to mimic overloading, strong typing, the builder pattern, …; see the builder sketch below)
- a clear explanation of the chosen solution (most proposals describe only this part)
- a clear explanation of the limitation(s) of the chosen solution (is it explicit or implicit, does it need modification in the caller and/or the callee to benefit from the improvement, is it visible to the type system, is there a high chance of creating churn in the ecosystem, …). I unfortunately don't think that any solution could be a pure improvement without any downsides.
And I also think that all of those points should have a TL;DR. Given all the discussion I have read about this subject, we need both a (very) detailed description and a short one, to cover everyone's needs.
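For reference, here is a minimal sketch of one of those existing techniques, the builder pattern; the names (`Window`, `WindowBuilder`) are made up purely for illustration:

// A minimal sketch of the builder pattern as a stand-in for named arguments.
struct Window {
    width: u32,
    height: u32,
    title: String,
}

#[derive(Default)]
struct WindowBuilder {
    width: u32,
    height: u32,
    title: String,
}

impl WindowBuilder {
    fn width(mut self, width: u32) -> Self { self.width = width; self }
    fn height(mut self, height: u32) -> Self { self.height = height; self }
    fn title(mut self, title: &str) -> Self { self.title = title.to_string(); self }
    fn build(self) -> Window {
        Window { width: self.width, height: self.height, title: self.title }
    }
}

fn main() {
    // Names at the call site without language support, paid for with the
    // boilerplate above.
    let w = WindowBuilder::default().width(800).height(600).title("demo").build();
    println!("{}x{}: {}", w.width, w.height, w.title);
}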
I confirm: it is a lot of work. I kind of burned out on this, then I moved and got a job, so I may not come back to it in the near future. I hope to find the time, though; I still wish to have named arguments in Rust in some form.
Both of these are incredibly boilerplatey at both the definition site and the use site. So this is a good argument for named arguments. (Or improvements to boilerplate.)
If your argument was "nobody needs that stuff--it's technically possible but in practice it just doesn't come up", that'd be different. Then the feature would possibly be not pulling its weight.
Every time someone claims that Rust doesn't have overloading and it's bad for Rust, I do a (virtual) facepalm--Rust would be unusable without overloading (that it already has).
c.size_of_collection()
f.size_of_file()
No thank you!
Heck, the entire magic of into() is based on overloading.
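A tiny illustration of that point: the same `.into()` call resolves to different `From` impls depending on the inferred target type.

fn main() {
    // Both lines call `into()`; the target type annotation picks the impl.
    let s: String = "fish".into(); // uses `From<&str> for String`
    let n: u64 = 42u8.into();      // uses `From<u8> for u64`
    println!("{} {}", s, n);
}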
If someone wants to argue that (x, y).foo() is good overloading but foo(x, y) is bad overloading, well, sure, have at it. But let's not pretend it isn't there (at least isomorphically)!
This isn't overloading/polymorphism; this is just method lookup (which can be seen as related, since it's type-dispatched). These are still two different functions with two different names, Vec::size and File::size.
Notably, method lookup is done statically.
There is a meaningful difference, though.
fn foo(x: &i32);
fn foo(x: &str);
This is ad-hoc polymorphism. There is no connection between the two implementations. The potentially surprising and problematic bit of overloading in e.g. Java is that you could call foo(Object) but get foo(ArrayList) instead, based on the dynamic type of your object.
Is there actually a difference between the two in a language that doesn't enforce parametricity, though? (E.g. Rust allows violating parametricity via specialization and TypeId.) You could make a pretty solid argument for "no" (that virtual method call could do anything!), but knowing statically which function is being called (even if that function just dispatches through the vtable pointer) is a useful property for program reasoning.
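A minimal sketch of that parametricity point (the function `describe` is a made-up name): with `TypeId`, a generic function can behave differently per concrete type, so its signature alone no longer tells you what it does.

use std::any::{Any, TypeId};

// Despite the generic signature, behavior depends on the concrete type.
fn describe<T: Any>(_: &T) -> &'static str {
    if TypeId::of::<T>() == TypeId::of::<i32>() {
        "an i32"
    } else {
        "something else"
    }
}

fn main() {
    println!("{}", describe(&1i32));  // "an i32"
    println!("{}", describe(&"eel")); // "something else"
}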
"Rust doesn't have named arguments" isn't the problem. Rust doesn't have lots of things.
So the best way to make this move forward might be to make progress on something that serves a similar intent to named arguments, but without the pervasive effects that named arguments, exactly as proposed, would have.
Your definition of boilerplate is very different from mine, or from the general consensus for that matter. Having strongly typed interfaces is good design, not redundant boilerplate. I see no redundancy in declaring a properly named struct to encapsulate multiple pieces of information that belong together, nor do I see a problem with making functions with long parameter lists a little less convenient to encourage better design practices.
Everything in programming is a tradeoff, and if you want, say, prototyping without mucking about with defining types, then perhaps Rust is not the best tool for you. There are plenty of scripting languages that choose the opposite set of tradeoffs.
Moreover, yes, having multiple ways to do the same thing, as you suggest, has much higher overall costs.
It's not just related, it's isomorphic. It's exactly the same operation!
Look:
// This is evil because "overloading"
// (hypothetical: Rust rejects duplicate `fn print` definitions, so this does not compile)
fn print(x: i32) { unimplemented!(); }
fn print(x: String) { unimplemented!(); }
fn print(x: i32, y: String) { print(x); print(y); }

fn main() {
    let i = 1i32;
    let s = "salmon".to_string();
    // Same method name, postfix arg(s)
    print(i);
    print(s.clone()); // clone so `s` is still available below
    print(i, s);
}
vs.
// This is brilliant because it's overloading BUT args appear on the LEFT!
// Or because it has tons of boilerplate that surely clarifies things?
struct NewI32(i32);
impl NewI32 {
    fn print(self) { unimplemented!(); }
}

struct NewString(String);
impl NewString {
    fn print(self) { unimplemented!(); }
}

struct NewBoth((i32, String));
impl NewBoth {
    fn print(self) {
        let NewBoth((i, s)) = self;
        NewI32(i).print();
        NewString(s).print();
    }
}

fn main() {
    let i = NewI32(1i32);
    let s = NewString("salmon".to_string());
    let is = NewBoth((2i32, "cod".to_string()));
    // Same method names, prefix arg
    i.print();
    s.print();
    is.print();
}
There's no ad-hoc polymorphism, no dynamic dispatch. It's all static dispatch both ways, except one way is a truckload of work to do on purpose (but happens inadvertently all the time as people create libraries and use the same names), and the other way is maligned because...doing it on purpose is worse than doing it by accident?
Java (and C++ and Scala and...) also dispatches statically to overloaded methods based on the static type of the arguments. (Clojure's multimethods and Julia's multiple dispatch do pick based on dynamic type. That is ad-hoc polymorphism. Plus lots of languages have ordinary polymorphism on the first argument of the function--but again this is a different kettle of fish.)
So, anyway, Rust totally has overloading (but only for the prefix argument).
I'm not sure why you don't see redundancy or boilerplate in something that is literally the same thing twice and/or literally the same thing only longer and with an extra superfluous name (for many use cases).
But, anyway, sure, if you don't find it redundant or boilerplatey, that's great for you.
However, if I'm not going to question that you don't, you can't question that I do.
And regarding general consensus--well, do you have a link to a poll or a discussion involving a sufficiently broad group of people to be somewhat representative?
fn foo(pub x: u32, pub y: u32, pub z: u32);

foo(
    x = 123,
    y = 456,
    z = 789,
);
and in fact, there is merely an overhead of 9 + 2n tokens[1] at the definition site and a constant overhead of just 3 tokens at the call site. And that's with the most generous named argument syntax.
A more reasonably addable syntax (and one I do think is a good idea) is to treat this as sugar for the full struct, which answers all the gnarly questions about names in function types with "it's a single argument of an unnameable type."
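Hand-written, that desugaring would look roughly like this (with `FooArgs` standing in for the unnameable generated type):

struct FooArgs {
    y: u32,
    z: u32,
}

fn foo(x: u32, FooArgs { y, z }: FooArgs) {
    dbg!(x, y, z);
}

fn main() {
    foo(123, FooArgs { y: 456, z: 789 });
}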
macro_rules! named_args {
    {
        // TODO: fudge some mostly correct generics support
        $vis:vis fn $fn:ident (
            $($unnamed:ident: $UnnamedTy:ty,)*
            // FIXME: use pub introducer; for some reason pub matches as $:ident
            $(@ $named:ident: $NamedTy:ty),+ $(,)?
        ) $(-> $RetTy:ty)? { $($body:tt)* }
    } => { ::paste::paste! {
        #[allow(non_camel_case_types)]
        $vis struct $fn {
            $(pub $named: $NamedTy,)*
        }
        $vis fn $fn(
            $($unnamed: $UnnamedTy,)*
            $fn { $($named),* }: $fn
        ) $(-> $RetTy)? { $($body)* }
        #[macro_export] // FIXME: use a real pub_macro feature polyfill
        macro_rules! [<__$fn>] {
            (
                $($$$unnamed:expr,)*
                $($named: $$$named:expr),* $$(,)?
            ) => (
                $fn(
                    $($$$unnamed,)*
                    $fn { $($named: $$$named),* }
                )
            );
        }
        $vis use [<__$fn>] as $fn;
    }};
}

mod example {
    named_args! {
        pub fn foo(x: i32, @ y: i32, @ z: i32) {
            dbg!(x, y, z);
        }
    }
}

// FIXME: cannot call macro by path, must use; fixed with $self
use example::foo;

fn main() {
    foo!(123, y: 456, z: 789);
}
I don't think that's how burden of proof works.
It's not superfluous as soon as anyone stores/encapsulates an instance of it.
If you want to be clever, just name it the same as the function.
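That works because a braced struct only occupies the type namespace while a function occupies the value namespace, so the two names don't collide; a minimal sketch:

#[allow(non_camel_case_types)]
struct foo {
    y: u32,
    z: u32,
}

// Same name as the struct; they live in different namespaces.
fn foo(x: u32, foo { y, z }: foo) {
    dbg!(x, y, z);
}

fn main() {
    foo(1, foo { y: 2, z: 3 });
}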
That I didn't remember this was the case isn't a good sign, though, tbh... I vaguely remember this being listed as a Java gotcha (that it doesn't do dynamic type dispatch). Though, how does it interact with generics/templates/etc.? (In C++, I know that calling an overloaded function in a template will call the overload for the instantiated type; I think Java chooses the overload based on the generic base class, and the gotcha is that it isn't dispatched based on the instantiated type.) With Rust you don't have access to inherent methods on generic types, so you don't have to define what happens in this case.
There is still a difference, though, in that the syntax does provide a meaningful communication channel; Rust still doesn't have function overloading on the type of the first argument (e.g. print(i)), only on the method receiver (e.g. i.print()). (And due to this difference I still hold that method lookup is meaningfully different from overloading.)
The method receiver is set apart syntactically and is also subject to a bunch of other rules, such as autoref; it is set apart because it is treated specially.
(They're also subtly different at the compiler level; method resolution is done by resolving the type of the receiver and then doing method lookup on the name, while overloading first resolves the name to the overload set and then selects the appropriate overload based on the argument types. The order does matter; e.g. as a function argument, an argument receives type constraints on an inference variable, whereas the receiver's type must be completely known before method resolution.)
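A small example of that ordering difference (`takes_u8` is a made-up name): a function call can constrain an inference variable, while a method call needs the receiver's type resolved first.

fn takes_u8(x: u8) -> u8 { x }

fn main() {
    let a = Default::default(); // `a` is still an inference variable here
    let b = takes_u8(a);        // the call constrains `a` to u8
    println!("{}", b);

    // By contrast, method lookup needs the receiver's type up front:
    // let c = Default::default();
    // let d = c.pow(2); // error: type annotations needed
}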
Additionally, the really gnarly cases of type-based lookup come when resolving overloads based on multiple arguments rather than just the type of the one (the method receiver). (Especially when types don't have to match precisely, because some overloads take a polymorphic type.)
Rust also has strong type inference that makes the types of bindings not necessarily known immediately. It's this multi-type-variable function resolution which gets expensive fast.
[1] Counting: struct, foo_Args, {, (function args,), }; foo_Args, {, (function args,), }, :, foo_Args. The 2n factor is for the commas in the struct def and the field names in the argument destructure, with the latter being the only thing you could really call redundant. ↩︎
Check out the latest Rust survey, where one of the top concerns ranked by users is the complexity of Rust, already quite a large language. Dumping a shedload of complexity into the language to duplicate existing functionality with a different syntax is therefore a non-starter.
As I said, I do not find declaring additional types to be "boilerplate" - that's a core aspect of a statically typed language. As I have said above, existing patterns can be improved upon by filling obvious gaps in the existing syntax - for example, adding inference for struct literals' names. That has negligible complexity costs, whereas having multiple ways of doing the same thing is very costly for learnability, maintenance, and the ability to reason about code.
Lastly, let's touch on the equivalence fallacy:
Where I am arguing against adding redundant complexity to Rust, the so-called negatively affected users are just the subset of users who advocate for named arguments.
Where you argue for adding named arguments, the affected users would be all Rust users. Even if I don't want this complexity in my code, the reality is that I would still need to deal with it if it exists in the language. Training would still need to account for it. Use of external APIs would still be affected by other people's choices. The other option is fragmentation of the ecosystem into different language subsets.
At least on Nightly you can implement overloading by implementing Fn/FnMut/FnOnce:
#![feature(unboxed_closures)]
#![feature(fn_traits)]

#[derive(Clone, Copy)]
struct S;

impl FnOnce<(u8,)> for S {
    type Output = u8;
    extern "rust-call" fn call_once(self, args: (u8,)) -> Self::Output {
        args.0
    }
}

impl FnOnce<(&'static str,)> for S {
    type Output = &'static str;
    extern "rust-call" fn call_once(self, args: (&'static str,)) -> Self::Output {
        args.0
    }
}

fn main() {
    let s = S;
    println!("{}", s(8u8));     // prints "8"
    println!("{}", s(99));      // ok if there is no ambiguity, prints "99"
    println!("{}", s("Hiya!")); // prints "Hiya!"
}
You can also effectively have function overloading based on the return type, e.g., with Into::into.
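`collect()` is another everyday example of return-type-driven selection:

fn main() {
    // The type annotation chooses the `FromIterator` impl:
    let v: Vec<u32> = (1..4).collect();
    let s: String = vec!["c", "o", "d"].into_iter().collect();
    println!("{:?} {}", v, s);
}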
Yes, this is effectively overloading. But the current trajectory is to never stabilize this ability.
This falls under trait-dispatched parametric polymorphism.
Ultimately: I agree that the difference between type-driven ad-hoc overload sets and the current method lookup and trait dispatch is slight, but what I'm really arguing is that there is a well-defined difference.
The difference is pretty much exclusively that overload sets are ad-hoc. Trait lookup is parametric, and method resolution is constrained. (E.g. one difference is that, due to how Java namespacing works, all items in an overload set must be declared in the same file. Rust's module system would allow constructing an ad-hoc overload set by importing the name from multiple files/modules, as well as parts of the overload set having different visibility, etc. There are real differences between ad-hoc overload sets and method lookup.)
I would like to explicitly disclaim any opinion, expressed or implied, on whether this side of the line is "better."
Yes, that is a fair point. I personally find that named arguments, even though they increase formal complexity, actually reduce perceived complexity because code using them often places reduced cognitive demands on the reader (and writer). It feels less complex. So I don't fully buy the complexity argument. But you're absolutely right that it does affect everyone. There's no getting around it. And there's a good argument to make that not complexity per se but "I've got this but just barely--don't make me change anything!" is a very valid reason to not make any changes unless they really pay for themselves.
So, yes, you're right: the two arguments are not equivalent.
I very much hope we do stabilize it. To the best of my knowledge, the main blocker has been the "rust-call" calling convention, which we could either stabilize, or replace with type-level variadics.
Okay, but this is like saying that we have different rules for chickpeas and garbanzo beans. Nothing prevents adopting exactly the same rules for static dispatch in cases that are distinguished only by a trivial syntactic rewriting. Every foo(x, y) is equivalent to an x.foo(y) and a (x, y).foo(), as sketched below.
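A minimal sketch of that rewriting (the trait and function names are made up): the same operation spelled as a free function, a method on the first argument, and a method on a tuple.

// Free function form: foo(x, y)
fn foo(x: i32, y: i32) -> i32 { x + y }

// Method-on-first-argument form: x.foo(y)
trait FooFirst { fn foo(self, y: i32) -> i32; }
impl FooFirst for i32 {
    fn foo(self, y: i32) -> i32 { self + y }
}

// Method-on-tuple form: (x, y).foo()
trait FooTuple { fn foo(self) -> i32; }
impl FooTuple for (i32, i32) {
    fn foo(self) -> i32 { self.0 + self.1 }
}

fn main() {
    assert_eq!(foo(1, 2), 1i32.foo(2));
    assert_eq!(foo(1, 2), (1, 2).foo());
    println!("all three spellings agree");
}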
It's all just dispatch to overloaded names. It's totally reasonable for type inference to step down a notch when you use overloaded names. (Would be good to have convenient syntax for type ascription, though.)
I agree that there can be all kinds of gnarly problems if you want to solve the most generic case possible. Same deal if your "method receiver" (chick pea) has complex types that would only be disambiguated by the method called. Maybe method receiver position is a good way to signal different expectations about type inference as compared to first argument (garbanzo bean). But what we shouldn't maintain is that Rust has no overloading. We might argue that it already has exactly the right amount (I'm skeptical, but hey, it's a coherent position), but not that it doesn't have it because we chose to call the exact same thing by a different name.
(There are gotchas with Java regarding generics: if you forget that overloading is not ad-hoc polymorphism, then in a generic context you will statically dispatch to whichever overloaded method is most specific for the root class in the hierarchy of all allowed generics. In contrast, the multimethod approach (or dynamic dispatch) that Julia uses will use the actual type.)
(Cool macro, by the way! I might actually use something like that if it were idiomatic. As it is, I think it'd really raise the difficulty for someone else who needed to understand my code, e.g. me in a few months/years.)
Sure, an evolution in this direction could fulfill the need for named arguments adequately. Not sure this is quite enough, but it's a lot better than the existing case. I'd have to use it for a while to know whether it scratched enough of the itch.
I'm all for minimal evolution of existing features to meet the need rather than importing wholesale the implementations chosen by other languages.
It's not the same, because you have to decide when you're resolving those names. See Two-phase Lookup, Koenig Lookup in C++ for the kinds of horrible things that end up happening with ad-hoc overloading.
It's entirely possible to make the same decision in both cases.
You just can't win an argument that two situations isomorphic up to a trivial syntactic rearrangement have any huge showstoppers in one case vs. the other.
You might be able to argue that the different syntax sufficiently strongly suggests different expectations that it is unwise to use the same rules for both. But you can't argue that it's necessarily fundamentally different. It's isomorphic!
Painting people who disagree with you as stupid doesn't earn your argument any points, nor does it help to disregard valid criticisms just because you don't personally see them.
While in your own personal project you could dismiss this as inconsequential, in a large code base worked on by multiple people over time, even the smallest duplication of features becomes a drain on productivity and a source of complexity and pain. If we want Rust to become the language for the next 50 years, we need to cater to such code bases! Google has over a billion LOC. What kind of code style do you reckon they prefer? They specifically disallowed a large chunk of advanced C++ features which "reduce boilerplate" because they preferred to have (more) code that is easier to reason about and that allows them to onboard new engineers faster.
Every feature added to Rust must satisfy the condition that it really pays for itself. That ought to be an obvious fundamental requirement for that same objective.
Edit:
As an example of this: at my $job we maintain a large code base in C++. All the engineers are fully capable and understand C++, yet we still waste time on endless debates because different people have differing preferences. So yes, the complexity of C++ is a major problem even for experts, and Rust should do better.