Rounding to the nearest representable number is exactly the problem. Changing the rounding mode is specifically meant to prevent that.
There are two different "rounding" steps that you are confusing:
- a / b needs to be rounded down (in the case of b > 0) to the next smaller integer (not to the next smaller representable number).
- The resulting integer needs to be rounded (up or down as appropriate) to the closest representable number.
Changing the rounding mode to "down" and dividing would in effect cause both steps to round down, which is not what is wanted.
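For a concrete f32 illustration of the first step going wrong (round-to-nearest division crossing an integer boundary; the values are just an example):

// 0.1_f32 is really 0.100000001490116..., so the exact quotient of 1.0 / 0.1_f32
// is 9.99999985..., whose floor is 9. Round-to-nearest division returns exactly
// 10.0, and flooring that gives 10, which is one too many.
let q = (1.0_f32 / 0.1_f32).floor();
assert_eq!(q, 10.0); // the correct Euclidean quotient is 9.0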
Actually, we could change the behavior of div_euclid from rounding to nearest to just rounding down, which would be clearer.
It would be inconsistent with all the other basic floating-point operations, which compute the exact value and round it to the nearest representable number. I don't see any reason for div_euclid to behave so differently.
A round-down division would still be very convenient for handling the common case where the integer quotient is not huge. That should look something like:
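// div_round_down stands for a hypothetical division that rounds the quotient
// toward negative infinity instead of to nearest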
let q = self.div_round_down(rhs.abs());
// Below this threshold (2^24), every integer is exactly representable as an f32,
// so the rounded-down quotient above can't have skipped past the correct integer
if q.abs() < 16777216.0_f32 {
return q.floor() * rhs.signum();
}
// Handle the cases that may need rounding ...
At least on targets like RISC-V, where the rounding mode can be encoded directly in the instruction, this should be very efficient.
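Roughly, as inline assembly (a sketch only; the function name, the riscv64 cfg, and the assumption that the F extension is enabled are all illustrative):

#[cfg(target_arch = "riscv64")]
fn div_round_down(x: f32, y: f32) -> f32 {
    let q: f32;
    unsafe {
        // fdiv.s with the static "rdn" rounding mode rounds toward negative
        // infinity without touching the dynamic rounding mode in fcsr.
        core::arch::asm!(
            "fdiv.s {q}, {x}, {y}, rdn",
            q = out(freg) q,
            x = in(freg) x,
            y = in(freg) y,
        );
    }
    q
}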
I have a pull request that fixes this in all cases.
I agree that using a non-default rounding mode for the easy cases might be more efficient, but I am afraid LLVM doesn't support this.