0.1 + 0.2 = 0.30000000000000004: a cliché question

0.1 + 0.2 = 0.30000000000000004

I know this is a cliché question.

I first saw some YouTube videos discussing the floating-point problem in JavaScript, then I tried it in Rust:

| Language | Syntax | Output |
|---|---|---|
| Rust | println!("{}", 0.1 + 0.2); | 0.30000000000000004 |

I found the same problem there.

Then I tried a bunch of other programming languages:

| Language | Syntax | Output |
|---|---|---|
| Python 3 | print(0.1 + 0.2) | 0.30000000000000004 |
| JavaScript | console.log(0.1 + 0.2); | 0.30000000000000004 |
| Node.js | console.log(0.1 + 0.2); | 0.30000000000000004 |
| Deno | console.log(0.1 + 0.2); | 0.30000000000000004 |
| Kotlin | println(0.1 + 0.2) | 0.30000000000000004 |
| Java | System.out.println(0.1 + 0.2); | 0.30000000000000004 |
| Ruby | puts 0.1 + 0.2 | 0.30000000000000004 |
| Swift | print(0.1 + 0.2) | 0.30000000000000004 |
| Elixir | IO.puts(0.1 + 0.2) | 0.30000000000000004 |
| Julia | 0.1 + 0.2 | 0.30000000000000004 |

I googled the reason for this; most results discuss the IEEE 754 standard.

I know this is not a big problem, since we are not sending a rocket to space, but can anyone explain why this is happening?

But when I used Go, I got the output 0.3:

| Language | Syntax | Output |
|---|---|---|
| Go | fmt.Println(0.1 + 0.2) | 0.3 |
| C# | Console.WriteLine(0.1 + 0.2); | 0.3 |
| R | print(0.2 + 0.1) | 0.3 |
| Lua | print(0.1 + 0.2) | 0.3 |
| C++ | std::cout << 0.1 + 0.2; | 0.3 |
| C | printf("%f", 0.1 + 0.2); | 0.300000 |

NOTE: I used replit.com to run the programs: https://replit.com/languages

My questions:

0. Does this language really solve the floating-point problem, or does it just do some tweaking in the background?
1. Is this because it uses good garbage-collector algorithms that clear the buffer?
2. Do they just round the output?
3. Are they using better floating-point algorithms?
4. Why do only some programming languages show this problem?
5. Is this problem due to the design of the programming language?
6. How can I solve this problem in Rust?

Can anyone give a convincing answer?

1 Like

Languages like Go that print "0.3" are rounding the output.

15 Likes

Use Float Toy to look at the representations of the numbers in the different IEEE formats. You'll see that 0.30000000000000004 is the correct answer.

5 Likes

And the "0.3" means you can't round trip the float -> string -> float.

https://play.golang.org/p/_hNWoypcMDx

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=17985a96d04373fe348a5fe2282b1575
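
Here is a minimal Rust sketch of that round-trip property (the playground links above demonstrate the same thing):

fn main() {
    let x = 0.1 + 0.2;
    // Rust's Display uses a shortest-round-trip printer, so parsing the
    // printed string recovers exactly the same bits.
    let s = format!("{}", x); // "0.30000000000000004"
    let y: f64 = s.parse().unwrap();
    assert_eq!(x.to_bits(), y.to_bits());

    // The rounded "0.3" parses to a *different* f64, so a language that
    // prints "0.3" here cannot round-trip the value.
    let z: f64 = "0.3".parse().unwrap();
    assert_ne!(x.to_bits(), z.to_bits());
}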

5 Likes

Also, NASA answered that #define PI 3.141592653589793 is enough for interplanetary navigation.

7 Likes

Can you please specify what the problem you found actually is? In IEEE 754 double-precision floating point, 0.1 + 0.2 is 0.30000000000000004 by specification. Some languages decided not to be very precise when printing, so they just round it to a shorter representation. To me these are all just facts and I can't see any problem, but a different person may find different problems in the same facts.

5 Likes
println!("{:.1}", 0.1+0.2);

This prints 0.3

The problem with this approach, however, is that specifying a bigger precision keeps the trailing zeroes:

println!("{:.5}", 0.1+0.2);

This prints 0.30000

This mainly relates to the fact that 0.1f64 + 0.2f64 != 0.3f64. See e.g. Basic usage - Documentation for float_eq 0.6.0. My guess is that Rust's println! (and a couple of other languages) uses a more modern algorithm that maps every floating-point number to one string representation in such a manner that the exact f64 binary representation can be reconstructed. Because 0.1f64 + 0.2f64 simply isn't the closest f64 to 0.3, it is mapped to 0.30000000000000004. Some of the other languages might use a different printing strategy (such as only printing the first 6 digits, or treating e.g. 5 consecutive zeros as a stopping criterion).
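
A small sketch of that inequality using only the standard library (no float_eq), assuming an epsilon-based comparison is acceptable for your use case:

fn main() {
    let sum = 0.1_f64 + 0.2_f64;
    // Exact comparison fails: the two values differ by one ULP.
    assert!(sum != 0.3);
    // An approximate comparison within f64::EPSILON succeeds.
    assert!((sum - 0.3).abs() <= f64::EPSILON);
}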

1 Like

I think you may be looking for the decimal crate.

2 Likes
  1. Floating point numbers are the tools of the devil, invented solely to drive good programmers insane.
  2. Like @zackw said, if you need precision, use a crate that offers it for you, e.g., rug or ramp, or any of the other ones that are out there (a dependency-free toy version of the idea is sketched below).
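
For flavor, here is a dependency-free toy of the exact-decimal idea those crates implement properly (purely illustrative; this is not the API of rug, ramp, or decimal):

fn main() {
    // Toy decimal fixed-point: track tenths as integers, so the
    // arithmetic 0.1 + 0.2 = 0.3 is exact by construction.
    // (Only handles sums below 1.0; real crates handle the general case.)
    let a: i64 = 1; // represents 0.1
    let b: i64 = 2; // represents 0.2
    let sum = a + b; // represents 0.3
    println!("0.{}", sum); // prints 0.3
}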

Should this be on users.rust-lang.org?

13 Likes

In all of these languages the actual value in memory is exactly 0.30000000000000004, because that's the native representation of the number on the CPU (all CPUs these days have settled on the same standard for floats).

Some of these languages choose to round the value down when printing it, so they display what you want to see, but not what the value really is.

Try printing (0.1 + 0.2) * 1e17 and you'll see that all of them display the same integer.
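
In Rust, for instance, a quick check of that claim (a sketch; the comment shows the integer that IEEE 754 double arithmetic should produce):

fn main() {
    // Scale the sum up so the stored bits show through as an integer.
    let x = (0.1 + 0.2) * 1e17;
    println!("{}", x); // should print 30000000000000004
}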

https://0.30000000000000004.com

10 Likes

Go is doing something weird here:

package main

import "fmt"

func main() {
	fmt.Printf("%.30g", 0.1+0.2)
}

yields 0.299999999999999988897769753748 there. That is actually closer to 0.3. As a wild guess, this may be the result of something like constant folding, with the compiler using better precision internally.

3 Likes

Apparently the website that @kornel linked has the answer:

Go

package main
import "fmt"

func main() {
  fmt.Println(.1 + .2)
  var a float64 = .1
  var b float64 = .2
  fmt.Println(a + b)
  fmt.Printf("%.54f\n", .1 + .2)
}
0.3  
0.30000000000000004  
0.299999999999999988897769753748434595763683319091796875

Go numeric constants have arbitrary precision.

7 Likes

The way I like to explain this is to start by pointing out that there is a finite set of double-precision floating-point numbers. Given 8 bytes, there can't be more than 2^64 different values (in reality slightly fewer, since a large block of bit patterns all represent NaN).

Given that observation, when we parse a base-10 string representation of a number (e.g., 0.1, 0.2, or 0.3), what we're actually doing is finding the closest valid floating-point value. The thing to keep in mind is that we can write out two base-10 values so close together that they end up with the exact same binary representation in memory. For instance, any number from 1.0 up to 1.0 + f64::EPSILON / 2 will pick the same internal representation as 1.0.
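
A minimal sketch of that collision, assuming round-to-nearest parsing:

fn main() {
    // 1.00000000000000005 is closer to 1.0 than to the next f64 up
    // (1.0 + f64::EPSILON), so both strings parse to identical bits.
    let a: f64 = "1.0".parse().unwrap();
    let b: f64 = "1.00000000000000005".parse().unwrap();
    assert_eq!(a.to_bits(), b.to_bits());
}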

The reason this is interesting for the current discussion is that when we go the opposite route and convert a floating-point value to a string, there are infinitely many strings we could return, so long as they all parse back to the same floating-point value. The "old school" way of printing a floating-point value so that it could be parsed back into the same exact value was to just printf("%0.64f\n", val), which includes enough digits after the decimal point to make that guarantee.

While that works, most folks don't like having to deal with 0.1000000000000000055511151231257827021181583404541015625000000000 when they just wanted 0.1. So the question then becomes: "What's the shortest string I can return that will parse back to the given floating-point value?" Unsurprisingly, it turns out this is fairly complicated stuff. dtoa.c is the canonical implementation if you're interested.
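
Both behaviors are easy to reproduce in Rust (a sketch; {:.64} plays the role of the old printf("%0.64f") here):

fn main() {
    // Exact decimal expansion of the f64 nearest to 0.1, padded to 64 places:
    println!("{:.64}", 0.1_f64);
    // Modern shortest-round-trip printing:
    println!("{}", 0.1_f64); // 0.1
}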

The last step in understanding why 0.1 + 0.2 doesn't print as 0.3 is to consider the process that is happening. First we pick floating-point representations for 0.1 and 0.2, then add them according to floating-point math rules. This gives us the floating-point value 0x3FD3333333333334 in hex. However, when we parse 0.3 directly we get 0x3FD3333333333333, which is very close but not the same (N.B., there is no valid floating-point value between these two; they are "one apart").
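
Those two bit patterns can be checked directly; a minimal sketch:

fn main() {
    let sum = 0.1_f64 + 0.2_f64;
    println!("{:#X}", sum.to_bits());     // 0x3FD3333333333334
    println!("{:#X}", 0.3_f64.to_bits()); // 0x3FD3333333333333
}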

So when we print the result of 0.1 + 0.2, we show 0.30000000000000004 because it is the shortest possible string that parses back into the same exact floating-point value. Returning 0.3 would have parsed into a different floating-point value, so that would be invalid.

So the general answer is that any language that shows 0.30000000000000004 is showing a very precise representation of the floating-point value in RAM. Any language that shows fewer digits is showing a rounded and/or truncated representation.

0. Does this language really solve the floating-point problem, or does it just do some tweaking in the background?

Assuming "this language" refers to Rust: it's because Rust gives you a string that represents a specific double-precision floating-point value. If by "the floating-point problem" you mean 0.1 + 0.2 != 0.3, it's not actually a problem; that's correctly implemented IEEE 754 behavior. If you wish to avoid it, look into arbitrary-precision arithmetic.

1. Is this because it uses good garbage-collector algorithms that clear the buffer?

No. This has nothing to do with garbage collection.

2. Do they just round the output?

Any language showing fewer digits than 0.30000000000000004 is either rounding or truncating the value it prints.

3. Are they using good floating-point algorithms?

Any language showing fewer digits than 0.30000000000000004 is losing information in the output. Whether losing information subtly is good or not is up to you.

4. Why do only some programming languages show this problem?

Different languages made different decisions about how to render floating-point values.

5. Is this problem due to the design of the programming language?

No, this behavior is a correct implementation of IEEE 754.

6. How can I solve this problem in Rust?

I assume you mean "How can I have Rust print 0.3 in this situation?" If so, either look into arbitrary-precision math or round your values to some number of significant digits. See this example, or the sketch below.
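
A minimal sketch of the rounding approach (plain std only; the scale-round-divide trick is a generic idiom, not taken from the linked example):

fn main() {
    let sum = 0.1 + 0.2;

    // Option 1: round only at formatting time.
    println!("{:.1}", sum); // prints 0.3

    // Option 2: round the value itself to one decimal place.
    let rounded = (sum * 10.0).round() / 10.0;
    println!("{}", rounded); // prints 0.3
}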
8 Likes

https://float.exposed

This site is also very nice to toy around with.

2 Likes

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.