Justification for Rust not Supporting Function Overloading (directly)


There was recently a Reddit post about implementing function overloading through “clever” use of traits. My personal feeling is that it demonstrates an anti-pattern, both in Rust specifically and in general. I find the lack of function overloading in Rust to be the correct choice, and I don’t think it’s a good idea to “teach” how to emulate design patterns that Rust has specifically decided aren’t warranted.

I suggested the original poster add a comment about Rust’s choices and perhaps some links to discussions of the decision. Little did I know, such discussions aren’t easily found. I couldn’t find anything directly relevant in the current RFCs (at least by searching), and nothing relevant is showing up through Google-fu. My guess is these discussions were settled before the current RFC process was adopted.

Does anyone have links to discussions, RFCs, or documents that provide the reasoning behind this choice in Rust? It would be nice to have the Reddit post referenced above, and the blog post it references, point back to such discussions and/or justifications.


I found this blog post: https://blog.rust-lang.org/2015/05/11/traits.html

Overloading. Rust does not support traditional overloading where the same method is defined with multiple signatures. But traits provide much of the benefit of overloading: if a method is defined generically over a trait, it can be called with any type implementing that trait. Compared to traditional overloading, this has two advantages. First, it means the overloading is less ad hoc: once you understand a trait, you immediately understand the overloading pattern of any APIs using it. Second, it is extensible: you can effectively provide new overloads downstream from a method by providing new trait implementations.
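To make the quoted pattern concrete, here is a minimal sketch of trait-based “overloading” (the trait name `Describe` and the implementing types are invented for illustration):

```rust
// A trait standing in for an "overload set": any type implementing
// Describe can be passed to `print_it`.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the integer {}", self)
    }
}

impl Describe for &str {
    fn describe(&self) -> String {
        format!("the string {:?}", self)
    }
}

// One generic definition replaces several overloaded signatures.
fn print_it<T: Describe>(x: T) -> String {
    x.describe()
}

fn main() {
    println!("{}", print_it(42));      // the integer 42
    println!("{}", print_it("hello")); // the string "hello"
}
```

This is where the “extensible” point from the quote comes in: a downstream crate can add a new “overload” of `print_it` simply by implementing `Describe` for one of its own types, without touching `print_it` itself.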


There probably isn’t much about it explicitly, because it’s a direct consequence of a different design choice: the desire to not have monomorphization-time errors in generics.

Take a function like this:

fn foo<T>(x: T) { bar(x) }

Today, that’s easy:

  • the call to bar is to the bar that’s in-scope at the definition of foo
  • bar must be a function that accepts anything, since T is unbounded
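A small sketch of both points (all identifiers here are invented): for `foo` to compile with an unbounded `T`, the `bar` in scope must itself accept anything; alternatively, `T` can be bounded by a trait that supplies the call.

```rust
// Case 1: the in-scope `bar` accepts anything, so `foo` with an
// unbounded T compiles. The call resolves at the definition of `foo`.
fn bar<T>(_x: T) {}

fn foo<T>(x: T) {
    bar(x);
}

// Case 2: bound T by a trait and dispatch through the trait instead.
trait Bar {
    fn bar(self) -> u32;
}

impl Bar for u32 {
    fn bar(self) -> u32 {
        self + 1
    }
}

fn foo2<T: Bar>(x: T) -> u32 {
    x.bar()
}

fn main() {
    foo(5);
    foo("text");
    assert_eq!(foo2(41), 42);
}
```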

Imagine overloading existed. Both would be way harder:

  • Which bar are we calling? Is it the bar in-scope right now? If there are two in-scope right now, which of them? An overload of bar that doesn’t exist yet? A bar in the same module where the concrete generic argument is defined?
  • When is foo even valid? Is it “whenever there’s a bar somewhere that happens to match what we need”? Do we need to list all the types for which bar exists in the bound somewhere?
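For contrast, today’s Rust simply rejects a second definition with the same name, so none of these resolution questions arise. A minimal sketch (the error shown in the comment is, as far as I know, what rustc reports for duplicate definitions):

```rust
fn bar(x: i32) -> i32 {
    x * 2
}

// Uncommenting this second `bar` is a compile error, not an overload:
//   error[E0428]: the name `bar` is defined multiple times
// fn bar(x: &str) -> usize {
//     x.len()
// }

fn main() {
    assert_eq!(bar(21), 42);
}
```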

Explicitly indirecting through the trait, rather than having everything be a potential indirection point, makes everything easier, both to understand what’s happening and to implement.


So, are you saying that “Easy Monomorphization” was deemed more important and useful than “Non-Generic, Variable-Arity Function Overloading”? I would agree with that conclusion, but it would be nice to point back to some design discussion early in the language where the pros and cons were hashed out and the decisions made.


Thanks for this. Pretty much exactly what I was looking for.


I think the framing is slightly off. I’m not aware of Rust ever having function overloading, so the question is not “why does Rust not support function overloading?” but “what would allow Rust to support function overloading?” Features only get added if there’s a proposal, with justifications. As far as I can remember, there was never such a proposal, and therefore no rejection of one either.


tl;dr: Generics and function overloading don’t play well together. Quoting boats here: