Average function for primitives

In hardware, signed division is implemented as a.abs() / b.abs(), with the signs reinserted afterwards. I suspect backwards compatibility is the main reason that behaviour was retained in today's processors, as it would be relatively cheap in hardware to also adjust the values, not just the signs, after the unsigned division. So for the signed division operation on today's processors (the idiv instruction), rounding towards zero is what the hardware provides.

But I agree: I hate division that rounds towards zero and much prefer flooring division. Rounding towards zero also introduces a kind of discontinuity around zero. In digital signal processing I'm always on the lookout for that, because rounding towards zero increases the introduced quantization noise power by a factor of four compared to flooring division.
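To make the discontinuity concrete, here's a small sketch (plain Rust, using div_euclid as the flooring division) that prints both roundings around zero:

```rust
// Sketch: compare truncating `/` with flooring `div_euclid` around zero.
fn main() {
    println!("    x | x / 2 (truncating) | x.div_euclid(2) (flooring)");
    for x in -4i32..=4 {
        println!("{:5} | {:18} | {:26}", x, x / 2, x.div_euclid(2));
    }
    // Truncating: -1, 0 and 1 all map to 0 (three preimages for one output),
    // which is the discontinuity around zero; with flooring, every output
    // has exactly two preimages.
}
```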


"/" doesn't always get translated to idiv. In particular, dividing by a power of 2 is translated to a shift, which rounds down. The compiler then has to insert extra instructions to adjust it for negative numbers in order to round towards 0 instead of down. Which is particularly bad, because dividing by a power of 2 is supposed to be fast. idiv is already slow, so having to adjust it in software wouldn't add much overhead percentage-wise. Having to adjust a shift adds a significant overhead percentage-wise.

Division by any compile-time constant, not just powers of 2, typically doesn't use idiv.

Example where LLVM can generate better code for rounding down than for rounding towards 0: https://rust.godbolt.org/z/cK6GsvzdP

This just shows that choosing peculiar language semantics based on hardware peculiarities is often a bad idea -- not only because hardware changes, but also because the translation to machine code is sometimes surprising and doesn't map 1-1 to machine instructions in the obvious way.

I think flooring would be mathematically cleaner than rounding towards a, for the reason you mention: a "discontinuity". If you plot the graph of midpoint(7, x), it will look weird around 7 with rounding towards 7 (three values of x correspond to the same output, rather than two). This might cause surprises, similar to division with rounding towards 0.
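A quick sketch of that graph, using a + (b - a) / 2 as a stand-in for the round-toward-the-first-argument rule (which is what C++ std::midpoint does for integers, ignoring the overflow handling the real thing has) and div_euclid for the flooring variant:

```rust
// Sketch: midpoint(7, x) with round-toward-the-first-argument vs flooring.
fn mid_toward_a(a: i32, b: i32) -> i32 {
    a + (b - a) / 2 // `/` truncates, i.e. rounds toward a
}

fn mid_floor(a: i32, b: i32) -> i32 {
    a + (b - a).div_euclid(2) // always rounds down
}

fn main() {
    for x in 3..=11 {
        println!("x = {x:2}: toward_a = {:2}, floor = {:2}",
                 mid_toward_a(7, x), mid_floor(7, x));
    }
    // toward_a: x = 6, 7 and 8 all give 7 (three preimages for one output);
    // floor: every output has exactly two preimages.
}
```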

Say you're programming a game, and you have two characters moving around, and for some reason you want to display something halfway between them.

If you use C++ midpoint for this, there will be a strange artefact when they cross the same x or y coordinate. If the two characters are moving at constant velocity, the midpoint will jump by half a pixel (on average) the moment they cross.
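Here's a hypothetical frame-by-frame sketch (made-up positions and velocities) of that artefact: before the crossing, odd gaps round the marker down, but after it a round-toward-a midpoint starts rounding them up, so the marker shifts up by half a pixel on average, while the flooring version doesn't.

```rust
// Character `a` moves right at 2 px/frame, `b` moves left at 1 px/frame;
// we draw a marker halfway between them each frame.
fn mid_toward_a(a: i32, b: i32) -> i32 {
    a + (b - a) / 2 // rounds toward `a`, like C++ std::midpoint
}

fn mid_floor(a: i32, b: i32) -> i32 {
    a + (b - a).div_euclid(2) // always rounds down
}

fn main() {
    let (mut a, mut b) = (0, 21);
    for frame in 0..10 {
        println!("frame {frame:2}: a={a:3} b={b:3} toward_a={:3} floor={:3}",
                 mid_toward_a(a, b), mid_floor(a, b));
        a += 2; // constant velocities
        b -= 1;
    }
    // Before a and b cross, odd gaps round the marker down; after they cross,
    // `toward_a` rounds them up instead, so the marker jumps up by half a
    // pixel on average. The flooring version has no such jump.
}
```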

Adobe PostScript at some point had a bug where letters moved up or down by 1 pixel because of division rounding towards 0.

So this would be an argument for flooring.

Indeed, one could expect midpoint(0, x) = x/2. So if plain division rounds towards 0, one could also expect midpoint to round towards its first argument. It would probably have been better to have both round downwards, but it seems too late to change plain division.
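A quick check of that expectation, printing both candidates next to Rust's x / 2:

```rust
// midpoint(0, x) under the two candidate rounding rules:
// round-toward-the-first-argument gives exactly x / 2 (truncating),
// while a flooring midpoint(0, x) would be x.div_euclid(2) instead.
fn main() {
    for x in [-5i32, -3, -1, 0, 1, 3, 5] {
        println!("x = {x:2}: x/2 = {:2}, flooring midpoint(0, x) = {:2}",
                 x / 2, x.div_euclid(2));
    }
}
```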

This is a great observation.

I think it suggests a possible direction for it too -- it could be a method on Range. (a..b).midpoint() on integers can have the postcondition that it returns a value inside the range. That could have an implementation like b.checked_sub(a).map(|d| d/2 + a). (With some extra details to get wrapping right, of course.)
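A minimal sketch of what that might look like for i32, handling the width overflow by going through u32 (the trait and method names here are hypothetical, not an existing API, and a real version would presumably be generic over the integer types):

```rust
use std::ops::Range;

// Hypothetical extension trait; `midpoint` on a range is a sketch, not std API.
trait RangeMidpoint {
    type Item;
    /// Returns a value inside the (non-empty) range, i.e. start <= m < end.
    fn midpoint(&self) -> Option<Self::Item>;
}

impl RangeMidpoint for Range<i32> {
    type Item = i32;
    fn midpoint(&self) -> Option<i32> {
        if self.start >= self.end {
            return None; // empty range: no value inside it to return
        }
        // The true width end - start can exceed i32::MAX, but it always fits
        // in u32, and wrapping subtraction recovers it exactly.
        let width = self.end.wrapping_sub(self.start) as u32;
        // width / 2 fits in i32, and start + width / 2 stays within [start, end),
        // so the addition below cannot overflow.
        Some(self.start + (width / 2) as i32)
    }
}

fn main() {
    assert_eq!((2..9).midpoint(), Some(5));
    assert_eq!((-9..-2).midpoint(), Some(-6));
    assert_eq!((i32::MIN..i32::MAX).midpoint(), Some(-1));
    assert_eq!((5..5).midpoint(), None);
    println!("all midpoint checks passed");
}
```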

