Proposal: Add is_default() automatically to types that implement Default and PartialEq


a whole new trait is a fairly heavy change for the value

I have no experience with the process of introducing changes here, so an example would be welcome to help me understand why it is that way.

we saw from Ord::min that adding a new method to particularly-common traits like this tends to cause breakage – allowed breakage, mind you, but still breakage

I don’t know what happened there but this would be a new trait, so if I understand correctly it doesn’t apply.


I’m using serde_derive and I need a function/method that does exactly that (literally self == Default::default()). As Scott said, it’s nothing that couldn’t be implemented by myself, but I thought this addition was small enough not to be too controversial, except perhaps for not being considered useful enough.


you could just use

fn is_equal_to_default<T: Default + PartialEq>(t: &T) -> bool {
    t == &Default::default()
}

and if you want method syntax, you could make the trait yourself

trait IsDefault: Default + PartialEq {
    fn is_default(&self) -> bool {
        self == &Default::default()
    }
}

impl<T: Default + PartialEq> IsDefault for T {}
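For reference, here is the same extension-trait pattern as a self-contained snippet with a quick sanity check (the trait is repeated so it compiles on its own):

```rust
// Blanket extension trait: every T: Default + PartialEq gets is_default().
trait IsDefault: Default + PartialEq {
    fn is_default(&self) -> bool {
        self == &Self::default()
    }
}

// Blanket impl: no per-type work needed.
impl<T: Default + PartialEq> IsDefault for T {}

fn main() {
    assert!(0_u32.is_default());
    assert!(!1_u32.is_default());
    assert!(String::new().is_default());
    assert!(!String::from("x").is_default());
}
```

Note the trait must be in scope (`use`d) at every call site, which is the ergonomic drawback discussed below.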


If you want your own “auto” trait, you can do the following:

trait PartialEqDefault : PartialEq + Default {
    fn partial_eq_default (self: &Self) -> bool;
}

impl<T : PartialEq + Default> PartialEqDefault for T {
    // default /* with feature specialization */
    fn partial_eq_default (self: &Self) -> bool {
        self == &Self::default()
    }
}
I hope it helps. EDIT: @Yato beat me to it :smile:

Now, in general, using Option is a better, more idiomatic solution (it requires fewer trait bounds and is as efficient as it can get). Hence another reason not to add such a proposal to ::std or ::core.


Traits have a bunch of extra questions. Like “should it be dyn-capable?” and “what expectations should this have for generic use?” and “what library types should override the implementation?”.

Just adding an inherent method to a type is a much smaller design space.

There are sort of two sides to that. Yes, it’s a new trait, so it won’t conflict unless it’s in scope. But bringing a new trait into scope with use std::whatever::DefaultPartialEq; is about as annoying as just typing == T::default(), so not having it in the prelude – and thus always in scope – drastically reduces its usefulness.


probably the “right” API would be to add it to Default as

fn is_default(&self) -> bool where Self: PartialEq<Self>

No idea about the breakage implications or whether it’s worthwhile.
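To make the shape of that proposal concrete, here is a sketch on a stand-in trait (MyDefault is hypothetical, since std’s Default can’t be edited from outside; the point is the `where` clause on the provided method):

```rust
// Stand-in mirroring Default's shape, with the proposed provided method.
trait MyDefault: Sized {
    fn default() -> Self;

    // Only callable when Self also implements PartialEq;
    // types without PartialEq still implement the trait fine.
    fn is_default(&self) -> bool
    where
        Self: PartialEq<Self>,
    {
        *self == Self::default()
    }
}

impl MyDefault for u32 {
    fn default() -> Self {
        0
    }
}

fn main() {
    assert!(0_u32.is_default());
    assert!(!7_u32.is_default());
}
```

The `where` clause means adding the method wouldn’t impose PartialEq on existing Default implementors; the open question is whether the new method name collides with existing inherent or extension methods.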


Now that I think about it, PartialEq is not a needed bound for is_default (take Option<T>). On the other hand, the bound makes it possible to provide a default implementation. Hence this should be its own trait, which could wait for specialization to gain the default implementation.


Although, even with specialization we would need the lattice rule to implement is_default for Option or types like it. I don’t think the lattice rule is being implemented in the current version of specialization.




I assumed this would only apply to types implemented in the same crate.

Actually my first instinct was to add the method to Default, but I didn’t remember that provided methods with their own trait bounds were possible.

I’m not sure I understand what you are saying.


If you make the trait, then you can implement it for any type.

It has been a while since I looked through the orphan rules, but this should be correct

you could implement is_default for Option<T> in the exact same way as Option::is_none
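A sketch of what that override looks like with a hand-rolled trait (names are illustrative): note that no bounds on T are needed at all, exactly like Option::is_none:

```rust
// Hand-rolled trait, so specific types can answer is_default directly.
trait IsDefault {
    fn is_default(&self) -> bool;
}

// Option's default is None regardless of T, so we need neither
// T: Default nor T: PartialEq to answer the question.
impl<T> IsDefault for Option<T> {
    fn is_default(&self) -> bool {
        self.is_none()
    }
}

fn main() {
    let none: Option<String> = None;
    assert!(none.is_default());
    assert!(!Some(1).is_default());
}
```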


I’m not a fan of the Default trait, because it has next to no semantics. It is nowhere defined what the ‘default’ value is supposed to actually represent. Let’s look through the list of implementations:

  • For number types (integers and floats) it’s zero.
  • For char, it’s '\0', i.e. U+0000; char does not have defined addition or multiplication, so this isn’t ‘zero’ in the same sense as above.
  • For Option<_>, it’s None. (I can imagine Some(0) to be a more useful starting value in some situations, e.g. if you’re using the None variant as an impromptu NaN-like value.)
  • For str and slices, it’s the empty slice, and likewise for their growable equivalents: Vec and String.
  • More generally, for containers that can hold any number of items, it’s an appropriate empty container.
  • For Mutex<_>, RwLock<_>, ManuallyDrop<_>, it’s the default value of the underlying type, if it exists.
  • For Rc/Arc, it’s a singly-referenced default of the underlying type, but for Weak, it’s a ‘stillborn’ weak pointer with no backing storage.
  • For Cow, it’s a default owned value, even if the borrowed variant implements Default as well (which I imagine would be more lightweight.)

In a generic setting, things are more murky: the ‘default’ value cannot be said to have any particular properties. Is it the smallest possible value? (True for char and unsigned integers; sorta-true for containers; false for signed integers and floating-point.) Is it the identity element of +? (True for number types, Vec and String.) Does it represent ‘a lack of a value’, whatever it may mean? (True for containers, Weak and Option; false for floating-point types, since the default isn’t a NaN.)

These examples have rather little to do with each other, other than some vague handwave-y notion of ‘zero’ or ‘emptiness’ (and delegating to the wrapped type). It may be what you need most of the time, but that’s primarily because you already know what the concrete type is in a given situation and what you need it for. With its semantics so nebulous, Default seems useful as little more than a typing aid.

These questions will be even more pertinent for the proposed is_default method: when you know that a given value is the ‘default’, what can you really say about it? As I point out, it’s not all that much; but I fear some people are going to assume more things about it than is actually guaranteed anyway (I have already seen someone use Default::default() as a generic zero), which means the proposed functionality is at risk of becoming a correctness hazard.


Be that as it may. I find <T>::default() useful for initializing structs, and sometimes arrays within structs. That has nothing to do with naïve assumptions about <T>-related properties of that default.


The purpose of Default is just, I think, to provide an arbitrary valid/safe value to “occupy the memory” while an actual value is created to replace it (once ::std::mem::MaybeUninit is stabilized, it could provide a cheaper-but-more-dangerous alternative, to stay as far away as possible from ::std::mem::uninitialized or its overlooked cousin ::std::mem::zeroed).

On top of that, there appears to be one practical implicit assumption (which theoretically should not be relied upon, although it works well in practice): that the Default value is among the cheaper (if not the cheapest) values to construct.

This is strongly correlated with the following “interpretation of Default” (one that ends up being surprisingly accurate): emptiness

  • "empty Option<T>" = None. This is the default that makes the most sense.
  • “empty number” ~ number with no magnitude => 0
  • “empty collections” is the best example
  • “empty slice/str” also works.
  • for product types (e.g., struct, tuple structs & newtypes) it is the product of the recursive defaults, as expected,
  • general enums: this is the most controversial one, imho, since even false is only justified because of its historical ties to 0. So in the case of an enum, I see Default as a random discriminant.

So, besides the last enum situation, is_default could be used as a hacky way to check for emptiness.

As stated since the beginning of the thread, anyone who wishes to work with a special “default” case should use Option<T> instead (of T : Default), since it provides less ambiguous semantics.

EDIT: After seeing the official documentation’s stance on this:

this “emptiness” intuition is wrong and dangerous. My bad.


If you wanted to have a MaybeInfinite numerical type, using Option<{integer}> is a lazy and very unreadable way to describe it:

enum MaybeInfinite {
    Finite(i32),
    Infinite,
}

impl Default for MaybeInfinite {
    fn default() -> Self { MaybeInfinite::Finite(0) }
}

Just to move this discussion forward (or just forget about it), is @withoutboats’s take on the idea unintrusive enough to be considered acceptable despite not being an essential feature?


His approach is good enough for me. However, as he pointed out, the breakage implications need to be assessed to determine whether such an addition is a non-starter.

In terms of utility, the only place I think I would use this is as part of a compression process before storing (or archiving) state, to decide whether an instance needs to be explicitly saved or can simply be later restored to the type’s default.


That implies that default is deterministic and doesn’t have NaN-like behaviors.

For example, the proposed implementation would not give the correct result in a situation like this:

struct GameConfig {
    seed: usize,
}

impl Default for GameConfig {
    fn default() -> Self {
        Self {
            seed: random(),
        }
    }
}

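The failure can be demonstrated concretely. In this sketch, random() is stood in for by an atomic counter so the snippet runs without any external crate; the point is only that default() returns a different value each call:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static NEXT: AtomicUsize = AtomicUsize::new(0);

// Stand-in for random(): every call yields a fresh value.
fn random() -> usize {
    NEXT.fetch_add(1, Ordering::Relaxed)
}

#[derive(PartialEq)]
struct GameConfig {
    seed: usize,
}

impl Default for GameConfig {
    fn default() -> Self {
        Self { seed: random() }
    }
}

fn main() {
    let cfg = GameConfig::default();
    // An `is_default` defined as `self == &Self::default()` compares
    // against a *fresh* default, whose seed differs every time, so a
    // just-constructed default value still reports "not default".
    assert!(cfg != GameConfig::default());
}
```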

I totally disagree that Default implies an empty value. Such interpretation makes it uncomfortably close to golang’s zero value concept, which in Rust should be done with Option, not Default.


True. The is_default() fn proposed by boats seems useless when the default is non-deterministic.