I wanted to ask if this addition is small enough to not need the whole RFC + discussion process.
The proposal is to add a trait, blanket-implemented for every type that is Default and PartialEq, providing fn is_default(&self) -> bool, a method that checks whether the value equals its type's default. I suggest the name DefaultPartialEq, just to have something to discuss.
Maybe we’d add a convenience trait that depends on Default and Eq if it’s useful.
Let me know what you think.
Edit: Just to be more specific, the implementation would be something along these lines:
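A minimal sketch of what I have in mind (the trait name, the default method body, and the blanket impl are just one possible shape):

```rust
/// Hypothetical name, just to have something to discuss.
pub trait DefaultPartialEq: Default + PartialEq {
    /// Returns `true` if `self` equals `Self::default()`.
    fn is_default(&self) -> bool {
        *self == Self::default()
    }
}

// Blanket impl: every `Default + PartialEq` type gets the method for free.
impl<T: Default + PartialEq> DefaultPartialEq for T {}
```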
This seems like the kind of thing that will be an uphill battle to get into std, because
it’s a little convenience helper that can easily be defined elsewhere
a whole new trait is a fairly heavy change for the value
we saw from Ord::min that adding a new method to particularly-common traits like this tends to cause breakage – allowed breakage, mind you, but still breakage
As for the change itself, my first instinct is “why aren’t you using an Option?”
a whole new trait is a fairly heavy change for the value
I have no experience with the process of introducing changes here, so an example would be welcome to help me understand why that is.
we saw from Ord::min that adding a new method to particularly-common traits like this tends to cause breakage – allowed breakage, mind you, but still breakage
I don't know what happened there but this would be a new trait, so if I understand correctly it doesn't apply.
I'm using serde_derive and I need a function/method that does exactly this (literally self == Default::default()). As Scott said, it's nothing that couldn't be implemented by myself, but I thought this addition was small enough not to be too controversial, except perhaps for not being considered useful enough.
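For reference, a minimal standalone version of that predicate (the function name is mine; serde_derive users typically point a skip_serializing_if attribute at something like this):

```rust
/// `true` iff `value` equals its type's default
/// (literally `self == Default::default()`).
fn is_default<T: Default + PartialEq>(value: &T) -> bool {
    *value == T::default()
}
```

With serde, this would be used as `#[serde(skip_serializing_if = "is_default")]` on a field.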
Now, in general, using Option is the better, more idiomatic solution (it requires fewer trait bounds, and is as efficient as it can get). Hence another reason not to add such a proposal to ::std or ::core.
Traits have a bunch of extra questions. Like "should it be dyn-capable?" and "what expectations should this have for generic use?" and "what library types should override the implementation?".
Just adding an inherent method to a type is a much smaller design space.
There are sort of two sides to that. Yes, it's a new trait, so it won't conflict unless it's in scope. But bringing a new trait into scope with use std::whatever::DefaultPartialEq; is about as annoying as just typing == T::default(), so not having it in the prelude -- and thus always in scope -- drastically reduces its usefulness.
Now that I think about it, PartialEq is not a necessary bound for is_default (take Option<T>). On the other hand, the bound allows a default implementation to be provided. Hence this should be its own trait, and wait for specialization to supply the default implementation.
Although, even with specialization we would need the lattice rule to implement is_default for Option or types like it. I don’t think the lattice rule is part of the current version of specialization.
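To illustrate the Option<T> point: a standalone trait (hypothetical name IsDefault) can be implemented for Option<T> with no bounds on T at all, which a PartialEq-based blanket impl cannot express without specialization:

```rust
trait IsDefault {
    fn is_default(&self) -> bool;
}

// No `T: Default` or `T: PartialEq` bound is needed:
// `None` is the default for `Option<T>` regardless of `T`.
impl<T> IsDefault for Option<T> {
    fn is_default(&self) -> bool {
        self.is_none()
    }
}
```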
I’m not a fan of the Default trait, because it has next to no semantics. It is nowhere defined what the ‘default’ value is supposed to actually represent. Let’s look through the list of implementations:
For number types (integers and floats) it’s zero.
For char, it’s '\0', i.e. U+0000; char does not have defined addition or multiplication, so this isn’t ‘zero’ in the same sense as above.
For Option<_>, it’s None. (I can imagine Some(0) being a more useful starting value in some situations, e.g. if you’re using the None variant as an impromptu NaN-like value.)
For str and slices, it’s the empty slice, and likewise for their growable equivalents: Vec and String.
More generally, for containers that can hold any number of items, it’s an appropriate empty container.
For Mutex<_>, RwLock<_>, ManuallyDrop<_>, it’s the default value of the underlying type, if it exists.
For Rc/Arc, it’s a singly-referenced default of the underlying type, but for Weak, it’s a ‘stillborn’ weak pointer with no backing storage.
For Cow, it’s a default owned value, even if the borrowed variant implements Default as well (which I imagine would be more lightweight.)
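Each of the implementations listed above can be checked directly:

```rust
/// Spot-check the `Default` impls enumerated above.
fn check_listed_defaults() {
    assert_eq!(i32::default(), 0);
    assert_eq!(f64::default(), 0.0);
    assert_eq!(char::default(), '\0');
    assert_eq!(Option::<i32>::default(), None);
    assert_eq!(String::default(), "");
    assert_eq!(Vec::<u8>::default(), Vec::new());
    // Mutex delegates to the underlying type's default.
    assert_eq!(*std::sync::Mutex::<u8>::default().lock().unwrap(), 0);
}
```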
In a generic setting, things are more murky: the ‘default’ value cannot be said to have any particular properties. Is it the smallest possible value? (True for char and unsigned integers; sorta-true for containers; false for signed integers and floating-point.) Is it the identity element of +? (True for number types, Vec and String.) Does it represent ‘a lack of a value’, whatever that may mean? (True for containers, Weak and Option; false for floating-point types, since the default isn’t a NaN.)
These examples have rather little to do with each other, other than some vague handwave-y notion of ‘zero’ or ‘emptiness’ (and delegating to the wrapped type). It may be what you need most of the time, but that’s primarily because you already know what the concrete type is in a given situation and what you need it for. With its semantics so nebulous, Default seems useful as little more than a typing aid.
These questions will be even more pertinent for the proposed is_default method: when you know that a given value is the ‘default’, what can you really say about it? As I point out, it’s not all that much; but I fear some people are going to assume more things about it than is actually guaranteed anyway (I have already seen someone use Default::default() as a generic zero), which means the proposed functionality is at risk of becoming a correctness hazard.
As may be. I find <T>::default() useful for initializing structs, and sometimes arrays within structs. That has nothing to do with naïve assumptions about <T>-related properties of that default.
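For example, struct-update syntax lets you set a few fields and fill in the rest from the default (Config here is a made-up type):

```rust
#[derive(Default)]
struct Config {
    retries: u32,
    verbose: bool,
    name: String,
}

fn make_config() -> Config {
    // Set one field explicitly; everything else takes its default.
    Config { retries: 3, ..Default::default() }
}
```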
The purpose of Default is just, I think, to provide an arbitrary valid/safe value to "occupy the memory" while an actual value is created to replace it (once ::std::mem::MaybeUninit is stabilized, it could provide a cheaper-but-more-dangerous alternative, to stay as far away as possible from ::std::mem::uninitialized or its overlooked cousin ::std::mem::zeroed).
On top of that, there appears to be one practical implicit assumption (one that theoretically should not be relied upon, although it works well in practice): that the Default value is among the cheaper (if not the cheapest) value(s) to construct.
This is strongly correlated with the following "interpretation of Default" (one that ends up being surprisingly accurate): emptiness
"empty Option<T>" = None. This default is the one that makes more sense.
"empty number" ~ number with no magnitude => 0
"empty collections" is the best example
"empty slice/str" also works.
for product types (e.g., structs, tuple structs & newtypes) it is the product of the recursive defaults, as expected,
general enums: this is the most controversial one, imho, since even false is only justified by its historical ties to 0. So in the case of an enum, I see Default as an arbitrary discriminant.
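The product-type case above is exactly what #[derive(Default)] implements, with each field recursively defaulted (Point is an illustrative type):

```rust
// `Point::default()` is the product of its fields' defaults.
#[derive(Default, PartialEq, Debug)]
struct Point {
    x: i32,
    label: String,
}
```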
So, besides the last enum situation, is_default could be used as a hacky way to check for emptiness.
As stated since the beginning of the thread, anyone who wishes to work with a special "default" case should use Option<T> instead (of T : Default), since it provides less ambiguous semantics.
EDIT: After seeing the official documentation stance on this:
this "emptiness" intuition is wrong and dangerous. My bad.
Just to move this discussion forward (or just forget about it), is @withoutboats’s take on the idea unintrusive enough to be considered acceptable despite not being an essential feature?
His approach is good enough for me. However, as he pointed out, the breakage implications need to be assessed to determine whether such an addition is a non-starter.
In terms of utility, the only place I think I would use this is as part of a compression process before storing (or archiving) state, to decide whether an instance needs to be explicitly saved or can simply be later restored to the type’s default.