I am new to Rust and had high hopes for it with respect to memory management, and I wish I didn't have to run a separate static analyzer to find truncation issues like this.

Also, I understand that the example only covers compile time; such issues can happen during runtime assignment too. I also wish there were graceful handling of such truncations, in the form of internal logging or built-in exception handling.

I don't know if this is too "separate" for you, but clippy has the style lint excessive_precision for this. Fundamentally though, Rust doesn't change the fact that floating point is lossy -- even something as simple as 0.1 loses precision, since it is not exactly 1/10.
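A quick way to see that lossiness (a minimal demo; the printed digits are the value actually stored):

```rust
fn main() {
    // The f64 nearest to 0.1 is slightly above 1/10; printing with
    // extra precision exposes the stored value.
    println!("{:.20}", 0.1_f64); // prints 0.10000000000000000555
    // The classic downstream consequence of that rounding:
    assert!(0.1 + 0.2 != 0.3);
}
```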

For floating point literals, yes - you can detect that there's excessive precision, as per the clippy lint @cuviper pointed out. Note that this doesn't tell you about all truncation - only about cases where you have decimal digits that cannot affect the binary floating point value. It won't, for example, tell you that 0.51 is truncated when converted to binary form.
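For example, this sketch passes the lint untouched yet is still rounded (the comparison string is simply the literal padded with zeros):

```rust
fn main() {
    // No excess decimal digits, so `excessive_precision` stays quiet,
    // but 51/100 is not representable in binary floating point:
    let x = 0.51_f64;
    println!("{x:.20}");
    // The stored value differs from 0.51 within 20 decimal places.
    assert!(format!("{x:.20}") != "0.51000000000000000000");
}
```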

This, however, is going to get really challenging really quickly; the reason we have floating point is that we want to approximate the real numbers. The set of real numbers is an infinite set, so mapping it into 2^{32} (f32) or 2^{64} (f64) possible values is impossible; instead, Rust uses the IEEE-754 floating point rules to define how we approximate all possible real numbers in a floating point value.

Because floating point is inherently intended to approximate the entirety of the reals in a finite set, there's always an error bound on every floating point output; one of the goals of IEEE-754 is that when you care about that error bound, you can use numerical analysis to determine the error in your algorithm. For example, addition in floating point usually truncates, and you can (assuming you meet the conditions, which IEEE-754 aims to make possible) calculate the resulting round-off error with an algorithm called 2Sum. Sometimes, that error will be zero (for example, if a = 0.5 and b = 0.25, then the error is 0.0 in f32 or f64), and sometimes it will be non-zero (e.g. 0.1 + 0.2 in f64 gets me an error of -0.00000000000000005551115123125783 - see Rust Playground for 2Sum in Rust).
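The 2Sum step mentioned above can be sketched in a few lines (standard Knuth/Møller formulation; the error term is exact provided nothing overflows):

```rust
// 2Sum: given a and b, return the rounded sum s and the rounding
// error e, such that a + b == s + e in exact (real-number) arithmetic.
fn two_sum(a: f64, b: f64) -> (f64, f64) {
    let s = a + b;
    let a_approx = s - b;
    let b_approx = s - a_approx;
    let da = a - a_approx;
    let db = b - b_approx;
    (s, da + db)
}

fn main() {
    // 0.5 + 0.25 is exact in binary, so the error term is zero:
    assert_eq!(two_sum(0.5, 0.25), (0.75, 0.0));
    // 0.1 + 0.2 is not, so a non-zero error is recovered:
    let (s, e) = two_sum(0.1, 0.2);
    println!("s = {s}, e = {e:e}");
    assert!(e != 0.0);
}
```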

What, precisely, do you think should be warned about?

The hard part here is basically finding something helpful and specific to say, rather than boiling down to "please read https://dl.acm.org/doi/10.1145/103162.103163 then allow the lint".

Because I think it's perfectly reasonable for someone to write, say,
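For instance, a constant pasted with far more digits than f64 keeps (an illustrative literal):

```rust
fn main() {
    // ~36 digits of π pasted from a CAS; parsing rounds to the nearest
    // f64, which keeps only about 17 significant digits.
    let pi = 3.14159265358979323846264338327950288_f64;
    assert_eq!(pi, std::f64::consts::PI);
}
```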

even though that's way more precision than is needed, by just copying the literal from wolfram or similar. That will give an f64 as accurate as f64 can represent, which is exactly the behaviour one has to expect from f64.

After all, using floating-point types is saying that you're fine with small relative error, and that you don't need small absolute error.

That said, I've sometimes pondered having a way to get the bounds on a floating point literal. Like π ∈ (3.141592653589793_f64, 3.1415926535897936_f64), rather than just the closest value, since std::f64::consts::PI is just the closest value, which doesn't say whether that's an over- or under-estimate.
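One way to get that bracket today is to step the bit pattern to the adjacent representable value (a sketch; recent toolchains also have `f64::next_up` for this):

```rust
fn main() {
    use std::f64::consts::PI;
    // PI is the f64 nearest to π, and happens to be an under-estimate;
    // bumping the bit pattern by one gives the next representable
    // value, so the true π lies strictly between the two.
    let above = f64::from_bits(PI.to_bits() + 1);
    println!("{PI:.16} < π < {above:.16}");
    assert!(above > PI);
}
```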

It might sometimes be useful to have exact literals, where it is a compiler error if the literal decimal value is not representable in the destination type.

For the sake of example, suppose there's an attribute you can apply to float literals:

let x = #[exact] 1.0625; // 17.0 / 16.0
assert!(x * x == #[exact] 1.12890625); // 289.0 / 256.0
// ERROR: This literal is not representable in f32
// Note: The closest exact value is 0.100000001490116119384765625
let z = #[exact] 0.1;

For decimal that's problematic since you would end up with huge literals in the source for little reason, and I'd expect syntax like that could be misleading. ("Is this something I should add to my code to make it compute correctly?")

If Rust adds hexadecimal float-literals at some point, warning on inexact hex literals by default might be reasonable, since those are inexact only if there are too many digits (well, too many bits; the last digit would be restricted accordingly). The above example could instead look like:

let x = 0x1.1p0; // 1 + 1/16
assert!(x * x == 0x1.21p0); // 1 + 2/16 + 1/16²
// You probably don't want to write 0.1 as a hex literal,
let z = 0x1.99999ap-4;
// but printing it as such does nicely show the series expansion
// 1/10 = 1/2^4 + 9/2^8 + 9/2^12 + ...
// and also how the last digit has been rounded up to 10
// 0.1f32 = 1/2^4 + 9/2^8 + ... + 9/2^24 + 10/2^28
// This could warn that the value will be rounded
let z = 0x1.999999999999999999p-4;
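Rust doesn't have hex float literals yet, but the value of `0x1.99999Ap-4` can be spelled out today by scaling the hex significand (a sketch):

```rust
fn main() {
    // 0x1.99999Ap-4 means 0x199999A × 2^(-24) × 2^(-4): six hex digits
    // after the point plus the p-4 exponent, so divide by 2^28.
    let z = 0x199999A as f64 / (1u64 << 28) as f64;
    // This is exactly the f32 value of 0.1, rounded up in its last bit:
    assert_eq!(z, 0.1_f32 as f64);
    println!("{z:.27}");
}
```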

Yes, in a sense the standard library is lying when it answers "What is the value of π?" with `std::f64::consts::PI`, when in fact the best it could do is "Well, I can't say exactly, but it is greater than X and less than Y". One recent example of the resulting confusion is: `atan(tan(pi/2))` = `-pi/2` using `f32` · Issue #108769 · rust-lang/rust · GitHub. (To be clear, it still makes sense for std to have such constants. Most code doesn't need to care.)
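That issue is easy to reproduce, because the f32 constant for π/2 is slightly above the true π/2 (the trig calls go through the platform's libm, so exact digits can vary):

```rust
fn main() {
    let half_pi = std::f32::consts::FRAC_PI_2;
    // FRAC_PI_2 is slightly *greater* than the true π/2, so tan()
    // returns a huge negative value rather than heading towards +∞...
    let t = half_pi.tan();
    // ...and atan of that lands near -π/2, not +π/2.
    let back = t.atan();
    println!("tan = {t:e}, atan(tan) = {back}");
    assert!(t < 0.0 && back < 0.0);
}
```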

I would imagine there are already crates that do this, but one option would be to have a parsing function return something like

// Each variant contains the result of rounding to nearest,
// with the variant stating where the input was relative to that.
enum F64BeforeRounding {
    Below(f64),
    Exact(f64),
    Above(f64),
}

with methods to get the value rounded in whichever direction you want, or as an interval.
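As a rough illustration (all names here are mine, not an existing crate's API), such a parser could compare the input digits against the exact decimal expansion of the rounded value -- every finite f64 has one, with at most 1074 fraction digits, and Rust's `{:.N}` formatting prints it exactly (zero-padded) once N is large enough:

```rust
use std::cmp::Ordering;

#[derive(Debug, PartialEq)]
enum F64BeforeRounding {
    Below(f64),
    Exact(f64),
    Above(f64),
}

// Sketch for simple non-negative literals like "0.3" (no sign, no
// exponent): parse, then compare the input digits against the exact
// decimal expansion of the rounded value.
fn parse_with_direction(s: &str) -> F64BeforeRounding {
    let v: f64 = s.parse().unwrap();
    // `{:.1074}` prints every finite f64's exact decimal expansion.
    let exact = format!("{v:.1074}");
    let (vi, vf) = exact.split_once('.').unwrap();
    let (si, sf) = s.split_once('.').unwrap_or((s, ""));
    // Zero-pad integer parts on the left and fractions on the right,
    // so that a plain string comparison is a numeric comparison.
    let iw = vi.len().max(si.len());
    let fw = vf.len().max(sf.len());
    let key = |i: &str, f: &str| format!("{i:0>iw$}.{f:0<fw$}");
    match key(si, sf).cmp(&key(vi, vf)) {
        Ordering::Less => F64BeforeRounding::Below(v),
        Ordering::Equal => F64BeforeRounding::Exact(v),
        Ordering::Greater => F64BeforeRounding::Above(v),
    }
}

fn main() {
    assert_eq!(parse_with_direction("0.5"), F64BeforeRounding::Exact(0.5));
    // The f64 nearest to 0.1 is above 0.1; the one nearest to 0.3 is
    // below it, so the inputs classify in opposite directions:
    assert!(matches!(parse_with_direction("0.1"), F64BeforeRounding::Below(_)));
    assert!(matches!(parse_with_direction("0.3"), F64BeforeRounding::Above(_)));
}
```

Rounding toward a chosen direction then falls out naturally: for round-down, take the stored value for `Below`/`Exact` and step one ULP down for `Above`, and symmetrically for round-up.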