How should we provide fallible float-to-int conversions?

I added a PR for lossless float-to-int conversions as TryFrom impls. The intent was for these to fail unless the float exactly represents a number that fits within the destination type. At the time I did not deem it complex enough to warrant an RFC, just another impl of TryFrom.

During discussion, @hanna-kruppe raised the concern that these float conversions might be too special to put into TryFrom, so I’m raising the discussion here to gather more feedback from the community.

I would like to pose the following questions:

  • Through which mechanism should fallible float-to-int conversions be provided?
  • Should the default conversion (provided through TryFrom) coerce* the value in any way?

*: See the discussion regarding rounding policy below for options.

I’m reproducing the last comment I had in the PR since I think it’s useful for more context. This briefly discusses which rounding policy should be used for default conversion, and whether TryFrom should only provide lossless conversions in std.

Which rounding policy should be used?

Some GitHub searching results in:

  • trunc() as, 132 occurrences.
  • round() as, 172 occurrences.
  • ceil() as, 371 occurrences.
  • floor() as, 260 occurrences.

Note that since casting floats in Rust is unsound (#10184), the cases which cast (and truncate) might not constitute an active choice so much as “do something which makes this an integer”. This limited dataset leads me to believe that picking one policy runs the risk of making a large fraction of the community unhappy with the choice.
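For context, here is what these idioms do in current Rust: a bare `as` cast truncates toward zero, so `trunc() as` and a plain cast produce the same integer, while the other policies can each give a different result for the same input:

```rust
fn main() {
    // A bare `as` cast truncates toward zero, same as trunc():
    assert_eq!(1.9f64 as i64, 1);
    assert_eq!(-1.9f64 as i64, -1);
    assert_eq!(1.9f64.trunc() as i64, 1);

    // The other policies can each produce a different integer:
    assert_eq!(1.5f64.round() as i64, 2);
    assert_eq!(1.5f64.floor() as i64, 1);
    assert_eq!(1.5f64.ceil() as i64, 2);
    assert_eq!((-1.5f64).floor() as i64, -2);
}
```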

Making the conversion fail on inexact values can be a hint to the user that they should pick a policy explicitly. Discovering this only at runtime would be unfortunate, which speaks for instead providing specialized conversion methods for floats (e.g. f32::to_i32_exact through a trait like ExactFloatConversion).
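A minimal sketch of what such a trait could look like; `ExactFloatConversion` and `to_i32_exact` are only proposed names here, not existing std items:

```rust
/// Hypothetical trait sketch; not part of std.
trait ExactFloatConversion {
    /// Succeeds only if `self` is exactly an integer in i32's range.
    fn to_i32_exact(self) -> Option<i32>;
}

impl ExactFloatConversion for f32 {
    fn to_i32_exact(self) -> Option<i32> {
        // NaN and infinities fail the fract() check; the range bounds
        // are powers of two, exactly representable in f32 (2^31).
        if self.fract() == 0.0 && self >= -2_147_483_648.0 && self < 2_147_483_648.0 {
            Some(self as i32)
        } else {
            None
        }
    }
}

fn main() {
    assert_eq!(5.0f32.to_i32_exact(), Some(5));
    assert_eq!(5.5f32.to_i32_exact(), None);
    assert_eq!(f32::NAN.to_i32_exact(), None);
    assert_eq!(3e9f32.to_i32_exact(), None); // out of i32 range
}
```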

Outside of casting (which generally truncates), other strongly typed languages seem to force you to actively pick your policy as well.

From/TryFrom has been lossless so far

Many (all?) of the std trait implementations are lossless. Rounding a float is primarily a way to make members of one type fit into another. A similar but more extreme example would be to clamp or project an integer which is too wide to fit into a narrower sibling. The existing conversion methods do not attempt to “coerce” values in this manner.

It’s unfortunate that this is not stated clearly, since we now have to deal with what users would expect from the conversion traits, which is hard to quantify. But float conversions might also be so special that “lossless conversion” isn’t meaningful enough to warrant inclusion through TryFrom.

My personal observation is that the existing fallible conversion methods fail when information would otherwise be lost.
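For illustration, the existing integer impls already behave this way: an out-of-range value produces an error rather than being clamped:

```rust
use std::convert::TryFrom;

fn main() {
    // In-range values convert; out-of-range values fail instead of clamping.
    assert_eq!(i8::try_from(100i32), Ok(100i8));
    assert!(i8::try_from(300i32).is_err());
    assert!(u8::try_from(-1i32).is_err());
}
```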


Bump. Would really appreciate feedback/opinions from the internals community since the discussion is a bit stuck at the moment.

How about this as an invariant for TryFrom: Whenever TryFrom succeeds, converting the result back to the original type yields a value which compares equal (via PartialEq) to the original value?

While making TryFrom lossless is attractive, it would have surprising special cases. For example, converting a floating-point value to integer via ceil() and TryFrom would work for all values of reasonable magnitude except values in (-1, 0). They’d round up to -0, which wouldn’t losslessly convert. Similar things occur in other number systems as well, such as rational numbers, where some systems allow multiple representations of the same numeric value.
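The (-1, 0) case can be checked directly: `ceil()` of a small negative value really does produce a value that compares equal to zero but carries a negative sign:

```rust
fn main() {
    let x = (-0.3f64).ceil();
    assert!(x == 0.0);             // compares equal to +0 via PartialEq...
    assert!(x.is_sign_negative()); // ...but it is bit-wise -0.0
}
```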

And if that’s the invariant, I think it does make sense to have a TryFrom from float to integer. It seems common enough to have floating-point values which are known/likely to hold integer values, which may also overflow, where TryFrom seems like a good fit.


This comment asks whether there’s a common enough use for TryFrom here that wouldn’t involve calling ceil/floor/trunc/etc. immediately before it. It’s a good question to ask.

There are uses, such as consuming data which is expected to be valid for i32 but arrives in f64 format because it comes from something like JS. Or implementing a function like pow with an optimized path for the case where the exponent is integral. But these aren’t super common, and arguably they don’t need a generic interface either.
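As a sketch of that first use case, a checked f64-to-i32 conversion for data arriving as f64 could look like this (the helper name is hypothetical):

```rust
/// Hypothetical helper: succeeds only if `x` is exactly an i32 value.
fn f64_to_i32_checked(x: f64) -> Option<i32> {
    // i32's bounds are exactly representable in f64 (53-bit mantissa),
    // so a simple range check is safe here, unlike the f32 case.
    if x.fract() == 0.0 && x >= i32::MIN as f64 && x <= i32::MAX as f64 {
        Some(x as i32)
    } else {
        None
    }
}

fn main() {
    assert_eq!(f64_to_i32_checked(42.0), Some(42));
    assert_eq!(f64_to_i32_checked(42.5), None);
    assert_eq!(f64_to_i32_checked(3e9), None); // overflows i32
    assert_eq!(f64_to_i32_checked(f64::NAN), None);
}
```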

So if you do TryFrom, I still think using PartialEq makes sense. But I also see the argument not to do it.


This is such an interesting observation!

fn main() {
    let a = -0.0f32;
    let b = 0.0f32;

    println!("{} / {}", a.is_sign_positive(), b.is_sign_positive());
    println!("{} / {}", a.is_sign_negative(), b.is_sign_negative());
    println!("{} / {}", a.signum(), b.signum());
    println!("{}", a == b);
}

false / true
true / false
-1 / 1
true
Because there are differences in how we treat negative and non-negative zero, I’m partial towards -0 causing a conversion error; information would otherwise be lost.

If converting float -0 to int 0 were defined to fail, some obvious use cases would have surprising discontinuities. I mentioned the ceil() case above as one. Another would be if you assume that multiplying two integers together produces a value which you can successfully convert (when it doesn’t overflow): that will work until the seemingly ordinary case where one of the inputs is negative and the other is zero; their product would be -0, which would fail to convert. Consequently, I suggest that a TryFrom conversion from float to int that fails on -0 isn’t desirable.
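The multiplication case is easy to reproduce; the product of a negative value and positive zero is negative zero:

```rust
fn main() {
    let p = -3.0f64 * 0.0;
    assert!(p == 0.0);             // numerically zero
    assert!(p.is_sign_negative()); // but the sign bit is set: it is -0.0
}
```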

So if you also don’t want converting -0 to succeed, then I think your best option is to not define TryFrom for float-to-int conversions at all.

Yeah, I agree that this discontinuity is highly undesirable. For all intents and purposes negative zero is intended to be numerically equivalent to zero.

I just want to elaborate on my initial thinking: a desirable property of lossless conversion would be that for any function f, we can guarantee f(x) == f(T::try_from(U::try_from(x)?)?). This wouldn't be the case for negative zero. A simple example would be f(x) => x.signum(). More esoteric ones are atan2 (see this SO question).
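Both examples can be observed directly: `signum` and `atan2` each distinguish the two zeros even though they compare equal:

```rust
fn main() {
    assert_eq!(0.0f64.signum(), 1.0);
    assert_eq!((-0.0f64).signum(), -1.0);

    // atan2(±0, x) for negative x returns ±π depending on the zero's sign:
    let pos = 0.0f64.atan2(-1.0);
    let neg = (-0.0f64).atan2(-1.0);
    assert!(pos > 3.14 && neg < -3.14);
}
```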

You are asking for an isomorphism between integer values represented as two’s-complement integers and integer values represented as IEEE 754 floats, at least within the range where the float representation can convey an exact integer value. Unfortunately for you, the floating point system defined by IEEE 754 does not provide such an isomorphism. One example of that is precisely the negative zero of IEEE 754, which can be represented in ones’-complement integer arithmetic (such as was used in Univac and other computers of the 1960s and 1970s) but not in the two’s-complement integer arithmetic that is pervasive today.

(Having worked with both, I much prefer two’s-complement arithmetic.)


I think TryFrom has the wrong semantics for float-to-int conversions. In my opinion, rounding is not a good option, because the user should choose the rounding method, as suggested here. I think only doing lossless conversions also has the wrong semantics, because after rounding a finite float you are guaranteed to have an integer. I would suggest something akin to the following:

fn round(f: f64, p: RoundingPolicy) -> Option<i64>

(Of course, this could be made a trait and generic.)
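A rough sketch of that shape (names hypothetical, not an existing std or crate API), including finiteness and range checks:

```rust
/// Hypothetical API sketch; not part of std.
#[derive(Clone, Copy)]
enum RoundingPolicy {
    Trunc,
    Round,
    Ceil,
    Floor,
}

fn round_to_i64(f: f64, p: RoundingPolicy) -> Option<i64> {
    if !f.is_finite() {
        return None; // NaN and infinities have no integer value
    }
    let r = match p {
        RoundingPolicy::Trunc => f.trunc(),
        RoundingPolicy::Round => f.round(),
        RoundingPolicy::Ceil => f.ceil(),
        RoundingPolicy::Floor => f.floor(),
    };
    // i64::MIN as f64 is exactly -2^63, and 2^63 itself is exactly
    // representable in f64, so this range check is precise.
    if r >= i64::MIN as f64 && r < 9_223_372_036_854_775_808.0 {
        Some(r as i64)
    } else {
        None
    }
}

fn main() {
    assert_eq!(round_to_i64(1.5, RoundingPolicy::Round), Some(2));
    assert_eq!(round_to_i64(1.5, RoundingPolicy::Trunc), Some(1));
    assert_eq!(round_to_i64(f64::NAN, RoundingPolicy::Round), None);
    assert_eq!(round_to_i64(1e300, RoundingPolicy::Floor), None);
}
```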

I think those use cases are covered by rounding. If you know it's an integer, rounding is a no-op, independently of the rounding policy. An optimized path for pow could check whether rounding is a no-op or not, but I'm not sure whether this is a good idea instead of just using powi.
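Both claims are easy to check: rounding an integral float is a no-op under every policy, and an integral-exponent fast path for pow can be detected with fract():

```rust
fn main() {
    // Rounding an integral value is a no-op for every policy:
    let x = 5.0f64;
    assert_eq!(x.trunc(), 5.0);
    assert_eq!(x.round(), 5.0);
    assert_eq!(x.floor(), 5.0);
    assert_eq!(x.ceil(), 5.0);

    // A pow fast path could detect an integral exponent like this:
    let e = 3.0f64;
    if e.fract() == 0.0 && e.abs() <= i32::MAX as f64 {
        assert_eq!(2.0f64.powi(e as i32), 8.0);
    }
}
```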

Edit: This is implemented in the conv crate.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.