Pre-RFC: Named arguments

FWIW, I'm also currently uncertain about named arguments. They seem useful only in a weird middle ground: if something takes 1 argument it doesn't need to be named -- I'm totally fine with new and with_capacity instead of overloading new() and new(capacity:) -- and if something takes 4 named parameters I bet it'll want 5 named parameters soon, and thus it should be passing a real type instead.

So yes, named parameters are good for functions which are used more than once, have, say, 2-4 parameters, have multiple parameters of the same type, and aren't something (like [T]::swap) where the order of the parameters is irrelevant.

Thus the insert example isn't great to me. Its type is &mut Vec<T>, usize, T, so the at is just noise in every case except T == usize. I don't need named arguments for .insert(0.0, 1), since it doesn't compile. And if I'm reading code, it's usually more like .insert(i, x), so it's not surprising which is which. (Basically, this is me agreeing with yigal that named arguments are super-important in Python, but much less so in Rust. Note that, thanks to us not having integer coercion, even .insert(0_u8, 0) is an error, unlike in many other languages where a u8 can coerce to usize.)
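To make that concrete, here's a small sketch (ordinary stable Rust, nothing from the proposal) showing why the insert call sites above are already unambiguous:

```rust
fn main() {
    let mut v = vec![10, 20, 30];

    // Typical call site: index first, element second.
    let i = 1;
    let x = 15;
    v.insert(i, x);
    assert_eq!(v, [10, 15, 20, 30]);

    // Swapped or mistyped arguments already fail to compile, with no
    // named arguments involved:
    // v.insert(0.0, 1);  // error: mismatched types, expected `usize`
    // v.insert(0_u8, 0); // error: `u8` does not coerce to `usize`
}
```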

The example of "oh, but you could use with as the argument name for a builder" is incredibly unpersuasive to me. You could name every single parameter ever "with". We have types in rust, and the fact that you're passing a ConnectionOptions is the important thing here. Forcing people to type with: in front of it is entirely noise.

My strongest feeling is that I want to see ekuber's default field values RFC first, to explore a kind of "builder pattern lite" to see how that impacts things before going all the way to something as pervasive as named arguments. (Notably, that RFC won't propose changing name resolution, trait matching, etc the way this one does.)

That RFC would, for example, address the "and also can't handle non-optional arguments" objection in #12 above.

:+1: to this. Struct initializers using : are my biggest complaint with Rust syntax -- it would be so much nicer if = were for values and : were for types, rather than the confusing mix we have today.

One previous idea was RFC: Reserve `f(a = b)` in Rust 2018 by kennytm · Pull Request #2443 · rust-lang/rfcs · GitHub, in case that sparks something.

I'm not sure this is a meaningful distinction, since I could just say "it's not overloading, it's using the parameter types as part of the function name". And indeed, that's how overloading is actually implemented in things like C++.

But more importantly, if I can import two different functions with use yourcrate::foo;, then I'd say they're "overloaded", regardless of the exact details. Because that means their name is just foo, not foo(a:) and foo(b:).

There might be something to tease apart here.

If you're looking to "iterate quickly", then hopefully that's not breaking changes to the crate's public API. This is my biggest gripe with C#'s named parameters, actually: all parameter names (on exposed things) are always a semver guarantee. And thus I'm glad to see the proposal here doesn't instantly make every parameter name part of semver.

But for things that aren't part of the public API, I'm far more sympathetic to "ok, this function is kinda terrible, but whatever, it's just a module-private helper I call three times inside this file to reuse some logic".

That opens up some potential freedom, the same way as non_exhaustive does. We could, for example, provide a slightly-ugly syntax in the caller that would use the existing names, but that's only allowed for calling things defined in the same crate. So if you have a crate-local helper, sure you can call self.Handle(request, @@loadCookies = true, @@followRedirects = false); if you want. (Placeholder syntax, of course.) But you probably shouldn't make a function like that part of the public API. It's basically the same kind of idea as "sure, maybe you use a 4-tuple inside a function to store some stuff, but think carefully before you put a 4-tuple in a public API".

That would remove a bunch of the complications here, like signature changes, overloading, updating Fn traits, etc. But I can see some extra complications there too, like maybe "crate" is the wrong pivot here -- something from a derive might be "in" your crate but that doesn't necessarily mean you should be able to use its parameter names, so maybe it's more like "same hygiene context" or something.

Hmm, that makes me wonder about hygiene for argument names. Can a macro make an argument name that can be used only from inside the macro, but not outside of it?


I did not. If this proposes full-fledged overloading, then it's "useful" but very dangerous. If instead it just proposes pure syntactic sugar, then it's not dangerous but useless.

It's hardly a "standard". I already explained why that is a problem.


Thanks everyone for the interest, that was way more than I expected! I'm not a native English speaker, so don't hesitate to point out spelling mistakes; they can easily slip by.

You all gave me way too much feedback to go through in an evening but I'll try to take it all into account and will post a message here when the pre-RFC text has been updated in response.


For historical context:

Objective-C is a shitty reimplementation of Smalltalk. IIRC, Apple bought NeXT, whose NeXTSTEP platform was implemented in Obj-C by people who wanted to use Smalltalk and its design philosophy and idioms in an OS. That precluded the use of the Smalltalk binary-image design, so they created this lower-level hybrid with Smalltalk semantics and a C-like compilation model. So the platform's APIs were designed with named parameters from the get-go.

Smalltalk has a very elegant design for a dynamic language and as part of that design instead of function names, it uses the parameter names as the "message name" (read: function name). And I agree, that it is very elegant and allows for some neat API designs. If you squint at Obj-C, you'd see some of this shine through due to its history. It is just a different approach with a different set of trade-offs with regards to API design.

Having said that, Obj-C itself has problems, as you noted yourself, and isn't the best candidate to draw inspiration from. I'm sure that, to a large extent, the reason Swift has this is its role as a replacement for Obj-C and the need to be compatible with the aforementioned platform APIs.

Rust, however, does have function names (its roots are in OCaml's very functional background), and mixing these two alternative solutions to the same problem just adds redundant complexity to an already large and complex language. It's a redundant decision point in API design that would bring endless debates and impose cognitive costs on everybody using Rust.

If anything, this comment is quite rude on your part. This topic has come up numerous times in the past, and there is good reason why it was rejected multiple times. This RFC changes nothing in that regard. The expectation that people ought to invest their time re-explaining the same thing over and over is rude, given that the onus is on those wanting to change the status quo.

The example above changes nothing in the calculus here. The idiomatic ways to solve this in Rust API design are to use a plain old struct literal that groups all the parameters together, or a builder pattern when more complicated construction logic is needed. Having this many parameters, as in the above example, feels smelly to me regardless of whether they're named.

For me, the most readable solution would be to introduce a Filters struct around these parameters (all together, and not individual newtypes as in the strawman argument above).
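As a hedged sketch of that suggestion -- the Filters struct, Rule enum, and function below are hypothetical stand-ins, not cargo's actual API:

```rust
// Hypothetical grouping struct replacing five positional parameters.
#[derive(PartialEq)]
enum Rule {
    All,
    Default,
    None,
}

struct Filters {
    library: Rule,
    binaries: Rule,
    tests: Rule,
    examples: Rule,
    benches: Rule,
}

// Stand-in for a constructor like CompileFilter::new: it takes the
// whole group at once. The body just counts enabled filter kinds.
fn compile_filter(f: Filters) -> usize {
    [f.library, f.binaries, f.tests, f.examples, f.benches]
        .iter()
        .filter(|r| **r != Rule::None)
        .count()
}

fn main() {
    // The call site is self-documenting without named arguments:
    let enabled = compile_filter(Filters {
        library: Rule::Default,
        binaries: Rule::All,
        tests: Rule::All,
        examples: Rule::None,
        benches: Rule::None,
    });
    assert_eq!(enabled, 3);
}
```

Note that the field names act as labels at the call site, and unlike parameter names they are already part of the struct's public API in today's Rust.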

Moreover, I'd say that this actually calls for another feature entirely: inferred struct literals, i.e.

let num: i32 = 12;        // already inferred to: 12_i32
let f: Foo = _ { a: 12 }; // this literal could likewise be inferred to: Foo { a: 12 }

This could then be used to make the example look like the following snippet to satisfy the OP:

compile_opts.filter = ops::CompileFilter::new(_{ // inferred from the function signature
    library: LibRule::Default,
    binaries: FilterRule::All,
    tests: FilterRule::All,
    examples: FilterRule::none(),
    benches: FilterRule::none(),
});

This actually fills a gap in the current design instead of introducing an alternative design. It is an intuitive and composable solution. We already have inferred numeric literals and this is a natural extension of that, so it matches existing expectations and keeps the "there's one way to do it" best practice.


What I'd encourage you to do is concentrate on the "what problem is this solving?" part. It's awesome that you've written up all the details for how this would specifically work, but mostly you're going to get motivation-level feedback right now.

For example, I commented about :-or-= above, but I'd actually suggest you ignore that part for now, because it's not that important, really. It could meet its goals with either syntax, it's front-end so it could be changed relatively easily, and it doesn't impact harder questions like how it works with Fn or use or what have you.

That will hopefully help you avoid too much redrafting, and maybe focus the discussion more. For example, it might be easier to discuss the goals rather than overloading-vs-defaults as mechanisms for doing different things.

Bonus points if you can phrase the "here's a problem worth solving" part entirely without ever saying "named arguments". I likely agree with "gee, I wish there was a better way to _______" even if I'm not currently convinced by named arguments as the specific solution.


I disagree on this point. Passing an argument to a function is already a kind of assignment. The values at the call site are copied/moved to the function's parameters exactly as in an assignment with the equals sign.


In the past, when I've found myself desiring named arguments in Rust I've ended up using a macro to emulate them. Laying out how this can be done, and the limitations of this approach seems relevant to this discussion.

The technique

Given the following macro definition:

macro_rules! compile_filter {
    (
        library: $library:expr,
        binaries: $binaries:expr,
        tests: $tests:expr,
        examples: $examples:expr,
        benches: $benches:expr $(,)?
    ) => {
        // Forwards to the real constructor in the expected order.
        ops::CompileFilter::new($library, $binaries, $tests, $examples, $benches)
    };
}

Then the following code creates the expected CompileFilter and assigns it:

compile_opts.filter = compile_filter!(
    library: LibRule::Default,
    binaries: FilterRule::All,
    tests: FilterRule::All,
    examples: FilterRule::none(),
    benches: FilterRule::none(),
);

If all calls to CompileFilter::new go through the macro, then the argument order only needs to be gotten correct once.
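For readers who want a runnable version of the pattern, here is a self-contained sketch with a toy connect function standing in for CompileFilter::new; every name in it is illustrative:

```rust
// Toy stand-in for a function with several same-typed parameters,
// where call sites are easy to get wrong without labels.
fn connect(host: &str, user: &str, database: &str) -> String {
    format!("{}@{}/{}", user, host, database)
}

// The macro fixes the argument order in exactly one place; callers
// must spell out the labels, in this order, or the match fails.
macro_rules! connect {
    (
        host: $host:expr,
        user: $user:expr,
        database: $database:expr $(,)?
    ) => {
        connect($host, $user, $database)
    };
}

fn main() {
    let conn = connect!(
        host: "db.example.com",
        user: "alice",
        database: "prod",
    );
    assert_eq!(conn, "alice@db.example.com/prod");
}
```

Functions and macros live in separate namespaces, so the macro can even share the function's name, keeping call sites close to what the named-argument syntax would look like.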


The drawbacks

  • Using the correct order is required, unless one is willing to write (or write code to generate) n! match arms. And even then, that may increase compile times significantly.
  • Optional arguments are possible, but the only (relatively) simple methods I'm aware of involve a macro-internal variable per field and/or an arguments struct.
  • The compile errors for small mistakes in the format are often not very informative: "no rules expected this token in macro call".
  • All the generic reasons one may have to avoid macros: harder to implement good IDE support, extra friction in publicly exposing them, etc.

I suppose, given one wants to encourage people to be considerate when exposing named arguments, the friction associated with exposing macros to other crates could be considered a feature.


I don't see how adding friction for the users of the API is desirable.

That is actually what I had in mind when I wrote that comment. However, I believe ekuber's RFC was never submitted (or finished?), which is unfortunate, because it's such a useful feature.

A better question is why you would allow that. The author of the function put the parameters in a specific order and with specific names so that the usage site reads naturally. Changing the order will make the reading wrong.

Function calls sometimes have one (or more) very long argument, and in that case it (IMO) usually improves readability to put the long one last. As an API designer, I try to order my functions' parameters to gain that benefit, but sometimes the best order by that metric varies by call site.


Sounds like a code smell. If the argument is too long, it ought to be extracted into a variable with a meaningful name (which I sincerely doubt would be exceptionally long).


We can have a poor man's structural records tomorrow, as a library. I brought up this technique on URLO, in the thread Named arguments patchwork, which was not entirely serious.


struct Arg<T, const NAME: &'static str>(T);

macro_rules! record {
    ($($arg:ident: $typ:ty),*) => {
        ($(Arg<$typ, {stringify!($arg)}>),*)
    };
    ($($arg:ident),*) => {
        ($(Arg::<_, {stringify!($arg)}>($arg)),*)
    };
    ($($arg:ident = $value:expr),*) => {
        ($(Arg::<_, {stringify!($arg)}>($value)),*)
    };
}

fn from_polar(args: record!{radius: f64, angle: f64}) -> (f64, f64) {
    let record!{radius, angle} = args;
    (radius*angle.cos(), radius*angle.sin())
}

fn main() {
    let (x, y) = from_polar(record!{radius = 1.0, angle = 0.0});
    println!("({}, {})", x, y);
}

Edited so as to adhere to community guidelines.

Firstly, I agree that having inferred struct literals would be a more robust solution.

But considering that:

  1. This is OP's first pre-RFC.

  2. The "typo" you refer to is not a typo.

  3. You can misspell struct field names just as easily, and then you can't fix the interface without violating backwards-compatibility either.

  4. You argue that using meaningful types is "correct design", yet the standard library clearly doesn't use these consistently (using usize for indices being highly non-semantic, even limiting, in that a special type could handle indexing from the end). I therefore feel like the main motivation isn't "correct design", but inertia.

  5. You argue that languages where named params make sense are all dynamically typed, clearly not considering Swift, C#, or Scala.

It's you being rude, not the OP. That you feel insulted by this post being a duplicate of older ones -- and that you complain about it instead of, e.g., providing links to those older posts -- is something I don't get.

For information, some other posts about named arguments:


Moderator note: Please refrain from personal attacks. You can take issue with someone's behavior without making unkind remarks about their personality.


I really would just like to make a bit of a meta remark at just how refreshing it is to see such a well-written RFC written by somebody who has clearly read all of the past discussion, and who has new ideas to bring to the table on a topic that has been mostly running around in circles for years.

The proposal here is extremely cohesive and has a well-defined scope. I can see how each design decision and limitation (e.g. lack of reordering, forced usage at callsites) makes sense in the context of the rest of the design. At first I was unsure why overloading is a part of it, but as I read through it became clear that this was necessary to allow backwards-compatible adoption by existing APIs. You've certainly found a local maximum in the design space, and I don't think any small part of it can be easily tweaked without changing a bunch of the rest.

There are not any concrete criticisms I have been able to come up with which are not already at least acknowledged by the proposal. (For instance, the fact that callers may be forced to write name: name.)

So, about the proposal itself? Well, considering the fairly limited scope of the problems it is designed to solve (especially in comparison to other named argument proposals):

  • Providing library authors with an easier (for the author and for the user) alternative to newtypes/builders for a function which faces the problem of, "it can be unclear what this bool parameter is at callsites."
  • Letting a library author provide ::new(bar: ...) and ::new(foo: ...) rather than ::new_bar and ::new_foo

I feel the benefits are unlikely to outweigh the costs in additions to the language:

  • Addition of parameter names to the type system (in particular to fn types and Fn traits).
  • Argument names playing a role in function name resolution.
  • Syntax for:
    • defining named parameters
    • using named arguments
    • qualifying overloaded functions

and thus, I generally wouldn't expect to see it accepted in its current state.

The idea of "named arguments" brings with it a lot of baggage and expectations from other languages, and I get the feeling that many users would be excited to hear "rust is getting named arguments" only to then be frustrated and confused by the restrictions here. "Overloading via named parameters" is perhaps a more direct description of the idea.


This proposal really isn't "Overloading" in the sense that that word normally implies. It isn't overloaded based on the types of the parameters; rather, the function name is effectively the name declared for the function plus the names of the named parameters.

I think if every place where this RFC mentions "overloading" it was changed to "function name resolution" and emphasized the fact that the function name really includes the named parameter names and there really isn't any "Overloading" of the same function name, then I think it would be less controversial. I really think a lot of people are getting hung up on "Overloading" when it is really a non-issue.


Whether you call it "overloading" or not, it does make function name resolution a lot more complicated (e.g., use module::foo can now bring multiple functions in scope), it seems like a challenge to support nicely in rustdoc, and it means that creating a function pointer by just writing module::foo no longer works.

Right now, the syntax and semantics for f(args) is very compositional in the sense that f is an expression that evaluates to a function type, and that's all there is to it. This proposal breaks that property, and means that args needs to be taken into account to figure out what f will resolve to. It makes no difference to me whether it is the types or names in args that affect this. I think it is fair to call all of that "overloading" and like many others here I am concerned that this is a misfeature. The argument part of the function call syntax has no place affecting the function part of it in my view.

Furthermore, isn't it the case that argument-name-based-overloading/resolution could be added later? So, in the interest of having a smaller RFC with fewer controversial parts in it, and adding fewer new complexities to the language at once (I mentioned some of the challenges above), I think this RFC would fare a lot better without the overloading part.


That's a big part of what overloading is. The objection isn't to the terminology, it's to the concept.


What is objectionable about the concept:

parse(from: some_str, into: some_structure)?;
parse(from: some_str, into: some_structure, with_filter: some_filter)?;



because, to my reading, that is effectively what this RFC proposes. I just don't see how that is anything like overloading based on types. It is clear that the first option reads better and is clearer in intent. I can't think of any reasonable way this can be found objectionable, other than a reflexive abhorrence of "OVERLOADING" that isn't justified.


The inability to just write the name of a function to reference that function, whether in code, documentation, or other communication. The need for a new syntax to reference specific overloads of a function based on their parameter names. The need to support and use that syntax in any new places that want to reference function names. The added complexity for future use and stabilization of function trait implementations.

Also, there's no need to use a name like parse_from_into and spell out all the common arguments in the name; that could just be parse.

See above. And in general, please don't assume that people who don't share your position have a "reflexive abhorrence ... that isn't justified". That kind of dismissal doesn't invite answers or lead to understanding; it just characterizes others as incomprehensible eldritch roadblocks to your unassailable position.