Format specifier for non-scientific notation?

In Rust 1.58, the debug output format for float primitives changed: if a value is above or below a certain threshold (very large or very small), it is now written using scientific notation. Debug output formats are not guaranteed stable, so this is fine in terms of regression, and I imagine it will be helpful for general usage. However, I believe it has made it impossible to get the old non-scientific output, even though there are two specifiers ('e' and 'E') for printing scientific notation directly. Might it be possible to add a specifier to disable this behaviour and just print the digits out?

My personal motivation for asking is that the old behaviour makes it easier to compare and reason about diff values side by side:

    left: `1.0`,
   right: `1.0000002`,
abs_diff: `0.00000023841858`,

Compared with:

    left: `1.0`,
   right: `1.0000002`,
abs_diff: `2.3841858e-7`,

(Print float values in asserts in non-scientific format · Issue #21 · jtempest/float_eq-rs · GitHub)


You might be looking for precision.

Ah, right, precision nearly covers it, but I suppose what I'm looking for is variable-precision output without trailing zeros.
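To sketch what I mean (my understanding of the 1.58+ behaviour): a precision does force fixed notation, but it pads with trailing zeros instead of printing only the digits required:

```rust
fn main() {
    // Debug with no precision (Rust 1.58+): small values use scientific notation.
    println!("{:?}", 2.3841858e-7_f64); // 2.3841858e-7

    // Adding a precision forces fixed notation...
    println!("{:.14?}", 2.3841858e-7_f64); // 0.00000023841858

    // ...but it also pads shorter values with trailing zeros.
    println!("{:.14?}", 1.0_f64); // 1.00000000000000
}
```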

Debug output doesn't support the e or E specifiers, and Display hasn't changed and never uses scientific notation. Hence I don't quite get your point.

If using e or E (for using LowerExp/UpperExp) is an option for your use-case, then removing the ? (in order to use Display instead of Debug) will be an option, too. I.e. your float_eq crate could just use Display implementations, right?
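To illustrate the distinction between the traits involved (behaviour as of Rust 1.58+, as I understand it):

```rust
fn main() {
    let x = 2.3841858e-7_f64;
    println!("{}", x);   // Display: never scientific -> 0.00000023841858
    println!("{:?}", x); // Debug (1.58+): scientific for extreme magnitudes -> 2.3841858e-7
    println!("{:e}", x); // LowerExp: always scientific -> 2.3841858e-7
    println!("{:E}", x); // UpperExp: always scientific -> 2.3841858E-7
}
```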

Ah, I hadn't actually realised that LowerExp / UpperExp were distinct from Debug, which means that yes, that point doesn't make sense. In terms of library output, it comes from assert!-style macros, so I'd still want to use {:?}, since many, if not most, of the types the library supports don't have Display implementations. I suppose my real ask, then, is for a precision mode that prints only the digits required, bringing back the possibility of the old behaviour, since precision is something that Debug does support.

I do appreciate the change, in that something like

fn main() {
    let x = (f64::MIN_POSITIVE, f64::MAX);
    println!("{:?}", x);
}

does now produce legible output. Since Debug doesn't currently accept any flags that could specify whether or not scientific notation should be used, I feel the new behavior is better than the old, and without extending the set of flags that Debug accepts and Formatter offers, I don't see how the situation can be improved.

If anything, the fact that specifying a precision still makes it switch back to non-scientific formatting feels slightly inconsistent; maybe there's a different behavior that could make sense? :thinking:
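To make that inconsistency concrete, here is a small sketch (output per my understanding of the 1.58+ behaviour):

```rust
fn main() {
    // Without a precision, Debug picks scientific notation for large values...
    println!("{:?}", 1e20_f64);   // 1e20

    // ...but the moment a precision is given, it switches back to fixed notation.
    println!("{:.1?}", 1e20_f64); // 100000000000000000000.0
}
```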

That Debug does also, somehow, accept and handle the combination of x with ? (for built-in integer types, using non-public API) means it's not completely out of the question that e could be supported, too, in which case your point about not being able to specify a "never scientific notation" mode would become valid.


As I mentioned in the PR that made this change, the decision was motivated by backwards-compatibility concerns: existing code that uses a precision may be significantly more likely to be adversely affected by this change than code that doesn't, especially when used on &[f64] or similar homogeneous containers. But I don't have any actual data on this.

The effect would be even worse if adjusting the precision had an impact on the upper threshold, which is a common property of %g in other languages: 100.000 might become 1.00e2.

On numerous occasions such as this, I have lamented the lack of a :f/Fixed format in Rust. IMO, the existing behavior ought to be spelled {:.3f?} instead of {:.3?}...


Also FWIW I think the change to the default behaviour of {:?} was a good one! I just also think that the old behaviour has uses as well if explicitly requested.

FWIW, Display and Debug have at least one other difference: Display prints 0.0 as "0" but Debug prints it as "0.0".
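Concretely:

```rust
fn main() {
    println!("{}", 0.0_f64);    // Display: 0
    println!("{:?}", 0.0_f64);  // Debug: 0.0
    println!("{}", -0.0_f64);   // Display: -0
    println!("{:?}", -0.0_f64); // Debug: -0.0
}
```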

I too would like to be able to disable scientific notation. My use case is very simple: I want all numbers to have the same xxx.yyy format so they're easy to copy using multiple carets (with Ctrl+Shift+arrows; I don't want or need the same number of digits on each line). I can't be the only one who wants an easy way to get this consistent format. (I don't think using Display and setting a precision would work for me, because I'd like to avoid trailing zeros.)

In my opinion, this issue is caused by a difference of purpose: Debug is meant to convey some runtime value to a programmer in text form in a context where the type and value are in question (i.e. during debugging). Its purpose is therefore to represent the value without ambiguity while not wasting space.

The purpose of your library is a different one, namely to give visual cues based on the knowledge that you’re dealing with floating-point numbers. The top example nicely shows how the brain can absorb one form much quicker than the other, which involves the study of human perception and psychology.

For this reason I think the {:?} format specifier cannot solve both cases at the same time. Instead, its stated purpose in the library documentation is unambiguous textual representation, which is valuable and should be kept. Other number formatting schemes — of which there are many! — can be added via simple functions or wrapper types that implement Display, but their usage as well as their API design depend on the context. The standard library should offer simple tools that depend very little on the specific context.
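As a sketch of that wrapper-type approach (the `NonSci` name and exact behaviour are mine, not anything from std or float_eq): since Display for floats never uses scientific notation and already omits trailing zeros, a wrapper can delegate to it and restore only the ".0" suffix that Display drops for integral values, roughly recovering the pre-1.58 Debug output:

```rust
use std::fmt;

// Hypothetical wrapper: fixed-notation output with no trailing zeros,
// regardless of magnitude. Delegates to Display (never scientific) and
// re-adds the ".0" that Display omits for integral values.
struct NonSci(f64);

impl fmt::Display for NonSci {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let s = self.0.to_string();
        // "inf", "-inf" and "NaN" pass through untouched.
        if s.contains('.') || self.0.is_nan() || self.0.is_infinite() {
            f.write_str(&s)
        } else {
            write!(f, "{}.0", s)
        }
    }
}

fn main() {
    println!("{}", NonSci(2.3841858e-7)); // 0.00000023841858
    println!("{}", NonSci(1.0));          // 1.0
    println!("{}", NonSci(f64::MAX));     // very long, but fixed notation
}
```

The trade-off is that callers must opt in by wrapping values, which is exactly the context-dependent API-design question raised above.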

As a side remark, the aforementioned point about Display representation of 0.0 and −0.0 is a quirk that should be fixed in my opinion. But then again I’d want to use actual minus signs for prefix and infix operators (since we also use an actual plus sign for addition), not hyphens.
