impl<A> Sum<A> for A where A: Default + Add<A, Output = A>?

I found that std::iter::Sum is implemented manually for many types, e.g. all the integers and some numeric-like types, but it does not work for custom types that implement only Add and Default. Why not implement it generically?

This is my implementation (Rust Playground):

use std::ops::Add;

pub trait Sum<A = Self> {
    fn sum<I>(iter: I) -> Self
    where
        I: Iterator<Item = A>;
}

impl<A> Sum<A> for A
where
    A: Default + Add<A, Output = A>,
{
    fn sum<I>(iter: I) -> Self
    where
        I: Iterator<Item = A>,
    {
        iter.fold(A::default(), Add::add)
    }
}

How about adding it to the standard library?

All data types in the standard library use the same trivial implementation for Sum, but it seems this would be a breaking change unless such a blanket implementation had been specified from the start.

This is not possible to add, because it’s a blanket impl. Adding blanket impls for traits is a breaking change, and Rust’s standard library can’t make breaking changes.

To explain why it’s breaking: Any existing crate that contains a Sum implementation for a Default + Add<Output = Self> type would stop compiling due to conflicting implementations.

E.g.

use std::iter::Sum;
use std::ops::Add;

#[derive(Default)]
struct Foo;
impl Add for Foo {
    type Output = Foo;
    fn add(self, other: Self) -> Self {
        Foo
    }
}
impl Sum<Foo> for Foo {
    fn sum<I>(_iter: I) -> Self
    where
        I: Iterator<Item = Foo>,
    {
        Foo
    }
}

compiles successfully, while adding this kind of implementation to your playground results in

error[E0119]: conflicting implementations of trait `Sum` for type `Foo`
  --> src/main.rs:70:1
   |
9  | / impl<A> Sum<A> for A
10 | | where
11 | |     A: Default + Add<A, Output = A>,
12 | | {
...  |
18 | |     }
19 | | }
   | |_- first implementation here
...
70 |   impl Sum<Foo> for Foo {
   |   ^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `Foo`

Another interesting point: It’s also possible to do the opposite, implementing Add<Output = Self> and Default in terms of Sum<Self>. The default value would be [].into_iter().sum(), and adding a and b would be [a, b].into_iter().sum().
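To illustrate that opposite direction, here is a minimal sketch. The type `Meters` and the helper functions are hypothetical, invented for this example; the only "native" capability given to the type is a Sum implementation, and Default and Add behavior are then recovered exactly as the post describes:

```rust
use std::iter::Sum;

// Hypothetical type whose only hand-written capability is Sum.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Meters(u32);

impl Sum for Meters {
    fn sum<I: Iterator<Item = Meters>>(iter: I) -> Self {
        Meters(iter.map(|m| m.0).sum())
    }
}

// "Default" recovered from Sum: summing an empty iterator.
fn default_via_sum() -> Meters {
    [].into_iter().sum()
}

// "Add" recovered from Sum: summing a two-element iterator.
fn add_via_sum(a: Meters, b: Meters) -> Meters {
    [a, b].into_iter().sum()
}

fn main() {
    assert_eq!(default_via_sum(), Meters(0));
    assert_eq!(add_via_sum(Meters(2), Meters(3)), Meters(5));
}
```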

I would personally advocate for finding/designing some language features that would allow you to create such implementations for concrete types on an opt-in basis without any boiler-plate; but that’s probably a separate discussion. Kind of like trait implementation “templates” that can be defined and then used. (Obviously you can use macros today, but a dedicated feature could be properly type-checked when defined, not only when used, and could give better error messages.)


Could this be resolved by specialization?

You're also assuming that Default returns zero for all possible types, but there's no reason why that must be so.
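A concrete sketch of that pitfall, using a hypothetical type `StartAtOne` whose Default is deliberately not the additive identity. The proposed blanket impl would fold from `Default::default()` and silently produce an off-by-one result:

```rust
use std::ops::Add;

// Hypothetical type where Default is NOT the additive identity:
// say, a counter that by convention starts at 1.
#[derive(Debug, PartialEq, Clone, Copy)]
struct StartAtOne(i32);

impl Default for StartAtOne {
    fn default() -> Self {
        StartAtOne(1)
    }
}

impl Add for StartAtOne {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        StartAtOne(self.0 + rhs.0)
    }
}

// What the proposed blanket impl would do: fold from Default::default().
fn blanket_sum<I: Iterator<Item = StartAtOne>>(iter: I) -> StartAtOne {
    iter.fold(StartAtOne::default(), Add::add)
}

fn main() {
    let total = blanket_sum([StartAtOne(2), StartAtOne(3)].into_iter());
    // 2 + 3 "sums" to 6, because the fold's "zero" was 1, not 0.
    assert_eq!(total, StartAtOne(6));
}
```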


I'll note that we've had Iterator::reduce for 8 versions now, so there's always

iter.reduce(Add::add).unwrap_or_default()

But I'd say in general that one might as well just send a PR for those types implementing Add to implement Sum too. There's just not that many of them.
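As a sketch of that reduce idiom, using a hypothetical type `Millis` that implements only Add and Default: reduce needs no identity element, and Default is consulted only for the empty-iterator case.

```rust
use std::ops::Add;

// Hypothetical type implementing only Add and Default.
#[derive(Debug, PartialEq, Default, Clone, Copy)]
struct Millis(u64);

impl Add for Millis {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        Millis(self.0 + rhs.0)
    }
}

fn sum_millis<I: Iterator<Item = Millis>>(iter: I) -> Millis {
    // reduce folds pairwise without an identity element;
    // Default only supplies the result for an empty iterator.
    iter.reduce(Add::add).unwrap_or_default()
}

fn main() {
    assert_eq!(sum_millis([Millis(1), Millis(2)].into_iter()), Millis(3));
    assert_eq!(sum_millis(std::iter::empty()), Millis(0));
}
```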

Ooh, that's really clever. It means we have a stable version of num_traits::identities::Zero, via iter::empty().sum()!
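Wrapping that trick in a generic helper (the function name `zero` is just for illustration) makes the equivalence to a Zero trait explicit:

```rust
use std::iter::{self, Sum};

// A stable stand-in for num_traits::Zero: summing an empty
// iterator yields the additive identity for any T: Sum.
fn zero<T: Sum>() -> T {
    iter::empty().sum()
}

fn main() {
    assert_eq!(zero::<i32>(), 0);
    assert_eq!(zero::<u8>(), 0);
    assert_eq!(zero::<f64>(), 0.0);
}
```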


Would a trait bound work for this, such that implementing a specific marker trait is the opt-in to a blanket implementation? So a pattern like:

// definition:
trait Trait {
    fn method(&self);
}
trait DeriveTrait: TemplateBounds { }
impl<T: DeriveTrait> Trait for T {
    fn method(&self) {
        // template impl can use its required bounds
    }
}

// usage:
impl DeriveTrait for Foo { }
// OR  (but not both, as that would error with conflicting impls)
impl Trait for Foo {
    fn method(&self) {
        // custom impl doesn't need the same bounds
    }
}

For example, the code from the OP modified: Rust Playground (changed lines: 9, 13, 53). As far as I can see (probably not far enough), this avoids the breaking-change problem because no existing type implements the new trait.
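To make the pattern concrete without the playground link, here is a sketch of it applied to the OP's custom Sum trait (this only works for a locally defined trait, not std::iter::Sum, because of the orphan rules; the marker-trait name `SumByDefaultAdd` is invented for this example):

```rust
use std::ops::Add;

// The OP's custom Sum trait (not std::iter::Sum).
pub trait Sum<A = Self> {
    fn sum<I: Iterator<Item = A>>(iter: I) -> Self;
}

// The opt-in marker: implementing this grants the blanket Sum impl.
pub trait SumByDefaultAdd: Default + Add<Self, Output = Self> + Sized {}

impl<A: SumByDefaultAdd> Sum<A> for A {
    fn sum<I: Iterator<Item = A>>(iter: I) -> Self {
        iter.fold(A::default(), Add::add)
    }
}

#[derive(Default, Debug, PartialEq, Clone, Copy)]
struct Foo(u32);

impl Add for Foo {
    type Output = Foo;
    fn add(self, rhs: Self) -> Self {
        Foo(self.0 + rhs.0)
    }
}

impl SumByDefaultAdd for Foo {} // the explicit opt-in

fn main() {
    assert_eq!(Foo::sum([Foo(1), Foo(2)].into_iter()), Foo(3));
}
```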

This does, however, feel like it would be an unnecessary complication in the case of std::iter::Sum.

I'd love to have some non-derive way to opt into things, but just adding a subtrait doesn't solve it -- there's still an overlap problem.

Hopefully specialization will help with some of this by having defaults. Or maybe there's some kind of mixin feature we could add.

I don’t think specialization is necessary, and if it’s not necessary it should obviously be avoided. It seems to be a common idea around language-design discussion to quickly jump to the idea “maybe specialization helps” in all kinds of situations.

I’ve somewhat recently partially written down an idea that can help here too, originally posted to address a different point where I saw someone suggesting specialization as a solution to a problem where I didn’t think that specialization is appropriate. It didn’t get feedback where I posted it (probably because it was semi-off-topic) and I intend to eventually do some better explanation / make a more proper proposal elsewhere. Nonetheless feel free to take a look: zulip / archive (no login required).

To give some code example, I imagine something like

trait template SumByAdd: Add<Output = Self> + Sized {
    fn zero() -> Self;
}

impl<T> Sum<T> for T
where
    T: SumByAdd,
{
    fn sum<I>(iter: I) -> Self
    where
        I: Iterator<Item = T>
    {
        iter.reduce(T::add).unwrap_or_else(SumByAdd::zero)
    }
}

trait template SumByAddDefault: Add<Output = Self> + Default + Sized {}

impl<T: SumByAddDefault> SumByAdd for T {
    fn zero() -> Self {
        T::default()
    }
}

can be defined in any crate, and users of that crate could then write

#[derive(Debug, PartialEq, Eq)]
enum DataSize {
    Fixed(usize),
    Variable,
}
impl Default for DataSize {
    fn default() -> Self {
        DataSize::Fixed(0)
    }
}
impl Add for DataSize {
    type Output = Self;

    fn add(self, rhs: Self) -> Self::Output {
        use DataSize::*;
        match (self, rhs) {
            (Fixed(x), Fixed(y)) => Fixed(x + y),
            _ => Variable,
        }
    }
}

impl SumByAddDefault for DataSize {} // implements Sum

or e.g.

#[derive(Debug, PartialEq, Eq)]
enum DataSize {
    Fixed(usize),
    Variable,
}
impl Add for DataSize {
    type Output = Self;

    fn add(self, rhs: Self) -> Self::Output {
        use DataSize::*;
        match (self, rhs) {
            (Fixed(x), Fixed(y)) => Fixed(x + y),
            _ => Variable,
        }
    }
}

// implements Sum
impl SumByAdd for DataSize {
    fn zero() -> Self {
        DataSize::Fixed(0)
    }
}

Note that such templates are implement-only; you can’t write them in bounds (outside of the defining trait) or call any of their methods. Thus switching from one template to another, to a manual implementation, or to the same template in a semver-breaking new version of the defining crate is never a breaking change.

The writeup I linked also considers sealed traits, and the potential to have templates and sealed traits of the same name; this is irrelevant for this case.


You could also use this to e.g. provide a template with an add and a zero method that automatically implements, say, Sum and Add and Zero (from num) for you. Perhaps if you want (with a different template; i.e. everything is always explicitly opt-in), for Copy types, even including implementations of Add/Sum for &T.


Note that in my mind, the necessary extensions to coherence-checking / orphan rules that make this work should apply to all traits, not just trait templates. The “template” just implies the “implement-only” property described above that’s necessary to get the right stability guarantees when using such templates.

I think it depends on the case. For example, I think having a default impl<A: Sum + Copy> Sum<&A> for A makes sense to offer via specialization. Requiring that people implement or explicitly opt-in to that one just feels like busy-work to me, since if .copied().sum() works then .sum() should just work too.
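Today that gap is visible with any custom type: std only provides Sum over references for its own primitives, so by-reference summing of a user type needs .copied(). A sketch with a hypothetical type `Cm`:

```rust
use std::iter::Sum;
use std::ops::Add;

// Hypothetical Copy type with a by-value Sum impl only.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Cm(u32);

impl Add for Cm {
    type Output = Cm;
    fn add(self, rhs: Self) -> Cm {
        Cm(self.0 + rhs.0)
    }
}

impl Sum for Cm {
    fn sum<I: Iterator<Item = Cm>>(iter: I) -> Cm {
        iter.fold(Cm(0), Add::add)
    }
}

fn total(xs: &[Cm]) -> Cm {
    // No Sum<&Cm> impl exists, so xs.iter().sum() would not compile;
    // .copied() bridges the gap. The default impl discussed above
    // would make the .copied() unnecessary.
    xs.iter().copied().sum()
}

fn main() {
    assert_eq!(total(&[Cm(1), Cm(2), Cm(3)]), Cm(6));
}
```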

But for more subtle things -- like here, where Default doesn't actually provide the guarantee needed -- I agree that using specialization is probably the wrong approach, and some sort of direct opt-in makes more sense.

Interesting point. However, once opting into Copy-based support for Sum<&Self> is merely a matter of choosing one trait template over another, the amount of busy-work for that opt-in is close to zero anyway.

Also there’s the problem that default impl<A: Sum + Copy> Sum<&A> for A is a very specific implementation that’s not specialized by every kind of currently allowed implementation anyways. E.g. an implementation impl<A: SomeTrait> Sum<A> for MyType (for a type MyType: Copy) overlaps with but doesn’t specialize it. I’m not even sure if there are any concrete plans at all yet for supporting something like this with specialization yet. So it’s not only asking for specialization, it’s even asking for not-yet existing extensions of specialization. Even if we ever get a sound specialization feature that also includes such extensions, and where restrictions for soundness don’t make it impossible anyways to use it with the existing Sum trait, then it still seems questionable whether the complexity that having such a specializing/specializable/overlapping blanked implementation of Sum is going to be worth it.
