Unifying slice::Iter and slice::IterMut

I’ve noticed that slice::Iter and slice::IterMut are defined by a copy-pasting macro: https://github.com/rust-lang/rust/blob/3ee936378662bd2e74be951d6a7011a95a6bd84d/src/libcore/slice/mod.rs#L3009-L3256

It occurred to me that if we define

use core::ptr::NonNull;

struct IterRaw<T> {
    begin: NonNull<T>,
    end: NonNull<T>,
}

impl<T> Iterator for IterRaw<T> {
    type Item = NonNull<T>;

    fn next(&mut self) -> Option<NonNull<T>> {
        // all of the pointer-bumping logic lives here, written once
        todo!()
    }
}

we can then reuse IterRaw to define both Iter and IterMut:

use core::marker::PhantomData;

struct IterMut<'a, T> {
    inner: IterRaw<T>,
    phantom: PhantomData<&'a mut [T]>,
}

forward_iter_impls!(IterMut, &'a mut T);

That is, the macro-generated code would contain only simple forwarding and no actual logic.
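To make the forwarding concrete, here is a sketch of what the hypothetical forward_iter_impls! invocation above might expand to for IterMut. It uses only the types from the sketch and is not libcore’s actual code:

impl<'a, T> Iterator for IterMut<'a, T> {
    type Item = &'a mut T;

    #[inline]
    fn next(&mut self) -> Option<&'a mut T> {
        // The per-wrapper work is only the conversion from NonNull<T> to the
        // right reference type; the pointer arithmetic stays in IterRaw.
        // SAFETY (in this sketch): IterMut is constructed from a &'a mut [T],
        // so every pointer yielded by the raw iterator is valid and uniquely
        // borrowed for 'a.
        self.inner.next().map(|ptr| unsafe { &mut *ptr.as_ptr() })
    }

    #[inline]
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.inner.size_hint()
    }
}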

Will this refactoring make the code slightly faster to compile, since we no longer duplicate functions? Or does deduplication happen anyway?

Does it make sense to actually do this refactoring? Or, given that the current code works fine and is highly optimized, is it better not to touch it?


Since IterRaw<T> is generic, it will be monomorphized for each T. But if you use both Iter<T>(IterRaw<T>) and IterMut<T>(IterRaw<T>) with the same T, only the thin forwarding is monomorphized for both wrappers, while the raw iterator itself is monomorphized just once.
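For concreteness, the read-only wrapper would follow the same pattern, so with a single element type only one copy of the real iteration logic exists before inlining (again a sketch continuing the one above, not libcore’s current code):

struct Iter<'a, T> {
    inner: IterRaw<T>,
    phantom: PhantomData<&'a [T]>,
}

// With, say, T = u32, three instantiations exist before inlining:
//   Iter<'_, u32>'s forwarding next()     (thin)
//   IterMut<'_, u32>'s forwarding next()  (thin)
//   IterRaw::<u32>::next()                (the shared pointer-bumping body, once)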

Even if dedupe happens, it happens at the LLVM level and giving LLVM less to do will almost always be “better”.

As the existing implementation is currently macro-copy-pasted, achieving the reuse through the type system seems better to me (though I suppose it introduces another forwarding Iterator before aggressive inlining, so it isn’t a surefire win).


As a warning: last time this code was touched, Ralf found that even seemingly trivial changes had a material perf impact, because slice iteration is so fundamental to most Rust programs. (Hence the use of the normally-unneeded-and-bad #[inline(always)] and such.)

So while I support code simplification in general, in this particular case I might prefer the lower indirection of the macros, to make sure that LLVM continues to do as good a job as possible with it.
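One cheap sanity check, whichever implementation is in place, is to compare the optimized output for the canonical slice loops before and after the change. The functions below are only a sketch of the kind of code to inspect, not a prescribed methodology:

// Codegen for these loops is very sensitive to the iterator internals; with a
// good implementation both should stay tight, vectorizable pointer loops.
pub fn sum(xs: &[u64]) -> u64 {
    xs.iter().sum()
}

pub fn scale_in_place(xs: &mut [u64], k: u64) {
    for x in xs.iter_mut() {
        *x = x.wrapping_mul(k);
    }
}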


Welp, the best way to figure out whether perf will be impacted is to profile, right? 🙂 It doesn’t seem like this would be a large change, so I think the cost of experimentation would be low here.
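For a rough local signal, something like the following microbenchmark exercises exactly the two iterators in question. This is only a sketch using the criterion crate (the benchmark names and sizes are mine); for a libcore change the authoritative numbers would still come from the compiler’s usual perf runs:

use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

// Microbenchmarks for the two iterators the refactoring touches.
fn slice_iters(c: &mut Criterion) {
    let data: Vec<u64> = (0..4096).collect();

    c.bench_function("slice::Iter sum", |b| {
        b.iter(|| black_box(&data).iter().sum::<u64>())
    });

    c.bench_function("slice::IterMut add", |b| {
        let mut buf = data.clone();
        b.iter(|| {
            for x in black_box(&mut buf).iter_mut() {
                *x = x.wrapping_add(1);
            }
        })
    });
}

criterion_group!(benches, slice_iters);
criterion_main!(benches);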

The cost of making the code change is small; the cost of the profiling may be large.

My point isn't that it should or shouldn't be done -- I honestly don't know -- but that the tradeoffs for things as core as Vec can be different from the ones one might normally make. For example, Vec is monomorphized into so many crates that arguably-cleaner code in liballoc might turn out to be worse, because it gives LLVM more code to churn through and thus makes compilation slower for users. It's just hard to measure and weigh such things.


Ah yes, that was "fun"... for anyone who wants to know more details, here's the >100-comment-long PR.

