We’ve regressed in removing null checks. Some performance-oriented code, like itertools::Stride and itertools::ZipSlices, now falls prey to LLVM doubting the non-nullity of pointers and inserting extra checks.
The end effect is that loops that previously looked good to LLVM are no longer recognized. When loop transformations like unrolling and autovectorization become unavailable, the performance impact can be 5x or more.
libstd works around this with liberal use of `assume`, which is an unstable intrinsic, so regular stable Rust has no real recourse. I suppose it’s a bug that we can fix, though.
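For reference, a minimal sketch of what that workaround looks like on nightly (the surrounding function is made up for illustration):

```rust
#![feature(core_intrinsics)]

// Sketch of the libstd-style workaround: on nightly, the unstable
// assume() intrinsic feeds LLVM the fact that the slice's data
// pointer is non-null, so it doesn't insert its own check.
fn sum(xs: &[f32]) -> f32 {
    unsafe { std::intrinsics::assume(!xs.as_ptr().is_null()) };
    xs.iter().sum()
}
```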
Most of the affected code uses `<[_]>::get_unchecked()` or similar. I’ve also seen cases where an iterator derived from a `&Vec` compiled fine, while the same iterator derived from a `&[]` fell into this null check trap (code link)
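A minimal sketch of the affected pattern (function name hypothetical, not the linked code):

```rust
// get_unchecked elides the bounds checks, but LLVM may still
// insert a null check on the slice's data pointer, blocking
// unrolling and autovectorization of this loop.
fn dot_unchecked(a: &[f32], b: &[f32]) -> f32 {
    let n = a.len().min(b.len());
    let mut sum = 0.;
    for i in 0..n {
        unsafe { sum += a.get_unchecked(i) * b.get_unchecked(i) };
    }
    sum
}
```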
That would be equivalent to having the compiler spray the IR with assumes automatically. This shouldn’t be that hard to do; it’s just hard to do without adding an `assume` at every usage site of a reference.
Anyway, at first try this was a bit hard to reproduce; I’ll go and look closer at my real-world use cases. This code exhibits the null check problem in ZipSlices.
The relevant function is `dot`. Hit Release and then ASM. As you can see from the closure call, the code has to resemble the benchmark where I found the regression: the closure is surely part of why LLVM “forgets” the pointer information. So that was a bit obscure, but I’m still sure it’s a regression. Also, closures happen in my real code too, since I have to use some abstractions…
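A rough reconstruction of the shape of that test case (this approximates the linked playground code rather than reproducing it; names are hypothetical):

```rust
// The closure indirection mirrors the abstraction in the real code;
// routing the loop body through it is where LLVM seems to lose the
// non-null information about the slice pointers.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    let mut sum = 0.;
    let mut step = |x: &f32, y: &f32| sum += x * y;
    for (x, y) in a.iter().zip(b.iter()) {
        step(x, y);
    }
    sum
}
```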
@ker there’s already a `nonnull` attribute; `assume` shouldn’t be needed.
Edit: `assume` helps on this test case. But the regression is also present in current stable; is that possible?
`assert!(!a.as_ptr().is_null());` is an example of something that doesn’t optimize out, even if `a` is a slice. It also does not act like an `assume`.
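For concreteness, a sketch of that non-working pattern (function name hypothetical):

```rust
// The assert! stays in the emitted code as a real branch (with a
// panic path), and LLVM does not treat passing it as an assumption
// it can propagate, so the null check problem remains.
fn sum_assert(a: &[f32]) -> f32 {
    assert!(!a.as_ptr().is_null());
    a.iter().sum()
}
```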