Can you please specify what problem you actually found? Within IEEE 754 double precision floating point, 0.1 + 0.2 is 0.30000000000000004 by specification. Some languages decided not to be very precise when printing, so they just round it to a shorter representation. To me these are all just facts and I can't see any problem, but different people may find different problems in the same facts.
This mainly relates to the fact that 0.1f64 + 0.2f64 != 0.3f64. See e.g. Basic usage - Documentation for float_eq 0.6.0. My guess is that Rust's println! (and a couple of other languages) uses a more modern algorithm that maps every floating point number to one string representation in such a manner that the exact f64 binary representation can be reconstructed. Because 0.1f64 + 0.2f64 simply isn't the closest f64 to 0.3, it is mapped to 0.30000000000000004. Some of the other languages might use a different printing strategy (such as only printing the first 6 digits, or treating e.g. 5 consecutive zeros as a stopping criterion).
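In Rust terms, a minimal sketch of what's being described:

```rust
fn main() {
    let sum = 0.1f64 + 0.2f64;
    // The addition does not land on the f64 closest to 0.3 ...
    assert_ne!(sum, 0.3f64);
    // ... so the round-trippable string for it needs the extra digits.
    println!("{}", sum);    // prints 0.30000000000000004
    println!("{}", 0.3f64); // prints 0.3
}
```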
In all of these languages the actual value in memory is exactly 0.30000000000000004, because that's the native representation of the number on the CPU (all CPUs these days have settled on the same standard for floats).
Some of these languages choose to round the value when printing it, so they display what you want to see, but not what the value really is.
Try printing (0.1 + 0.2) * 1e17 and you'll see that all of them display the same integer.
That yields 0.299999999999999988897769753748 there, which is actually closer to 0.3. As a wild guess, this may be the result of something like constant folding, with the compiler using higher precision internally.
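For comparison, you can ask Rust for a fixed number of digits to look past the shortest round-trip strings (a quick sketch; other languages have printf-style equivalents):

```rust
fn main() {
    // Exact-value views of the two neighbouring floats under discussion:
    println!("{:.30}", 0.1f64 + 0.2f64); // 0.300000000000000044408920985006
    println!("{:.30}", 0.3f64);          // 0.299999999999999988897769753748
}
```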
The way I like to explain this is to start by pointing out that there is a finite set of double precision floating point numbers. Given 8 bytes there can't be more than 2^64 different values (in reality it's slightly fewer than that, since not all combinations of bits are valid floating point values).
Given that observation, when we parse a base 10 string representation of a number (e.g., 0.1, 0.2, or 0.3) what we're actually doing is just finding the closest valid floating point value. The thing to keep in mind here is that we can write out two base 10 values that are so close that they are represented by the same exact binary representation in memory. For instance, any number within half of f64::EPSILON of 1.0 will pick the same internal representation as 1.0 itself.
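For instance, in Rust (a quick sketch):

```rust
fn main() {
    // Two different base 10 strings that are close enough to land on the
    // exact same f64 bit pattern after parsing:
    let a: f64 = "1.0".parse().unwrap();
    let b: f64 = "1.00000000000000005".parse().unwrap();
    assert_eq!(a.to_bits(), b.to_bits());
}
```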
The reason this is interesting for the current discussion is that when we go the opposite route and convert from a floating point value to a string, there are an infinite number of strings we could return, so long as they all end up parsing to the same floating point value. The "old school" way of printing a floating point value that could be parsed back into the same exact floating point value would be to just printf("%0.64f\n", val), which includes enough digits after the decimal point to make this guarantee.
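A rough Rust analogue of that printf call (just a sketch):

```rust
fn main() {
    // 64 digits after the decimal point is enough to spell out the exact
    // binary value behind the literal 0.1:
    println!("{:.64}", 0.1f64);
    // 0.1000000000000000055511151231257827021181583404541015625000000000
}
```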
While that works, most folks don't like having to deal with 0.1000000000000000055511151231257827021181583404541015625000000000 when they just wanted 0.1. So the question then becomes, "What's the shortest string I can return that will parse to the given floating point value?". Unsurprisingly, it turns out that this is actually fairly complicated stuff. dtoa.c is the canonical implementation if you're interested.
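Rust's default Display output follows this shortest-round-trip idea, and the guarantee itself is easy to check (a sketch of the property, not of the algorithm's internals):

```rust
fn main() {
    let x = 0.1f64;
    let s = format!("{}", x); // "0.1" -- the shortest string that works
    let back: f64 = s.parse().unwrap();
    // Parsing the printed string recovers the exact same bit pattern.
    assert_eq!(back.to_bits(), x.to_bits());
}
```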
The last step in understanding why 0.1 + 0.2 doesn't print as 0.3 is to consider the process that is happening. First we pick floating point representations for 0.1 and 0.2 and then add them according to floating point math rules. This gives us the floating point value in hex as 0x3FD3333333333334. However, when we parse 0.3 directly we get 0x3FD3333333333333 which is very close but not the same (N.B., there's no valid floating point value between these two values as they are "one apart").
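You can inspect those bit patterns directly with f64::to_bits; a quick sketch:

```rust
fn main() {
    let sum = 0.1f64 + 0.2f64;
    let lit = 0.3f64;
    println!("{:#018X}", sum.to_bits()); // 0x3FD3333333333334
    println!("{:#018X}", lit.to_bits()); // 0x3FD3333333333333
    // The two values are adjacent f64s, exactly one bit pattern apart.
    assert_eq!(sum.to_bits(), lit.to_bits() + 1);
}
```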
So when we print the result of 0.1 + 0.2 we show 0.30000000000000004 because it is the shortest possible string that can be parsed into the same exact floating point value. Returning 0.3 would have parsed into a different floating point value so that would be invalid.
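This is easy to verify by parsing both strings back (another small sketch):

```rust
fn main() {
    let sum = 0.1f64 + 0.2f64;
    // The printed string parses back to the exact value we started with ...
    assert_eq!("0.30000000000000004".parse::<f64>().unwrap(), sum);
    // ... while "0.3" parses to the neighbouring, different float.
    assert_ne!("0.3".parse::<f64>().unwrap(), sum);
}
```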
So the general answer is that any language that shows 0.30000000000000004 is showing a very precise representation of the floating point value in RAM. Any language that shows fewer digits than 0.30000000000000004 is showing a rounded and/or truncated representation.
0. Does this language really solve the floating point problem, or does it just do some tweaking in the background?
Assuming "this language" refers to Rust, its because Rust is giving you a string that represents a specific double precision floating point value. If by "floating point problem" you mean 0.1 + 0.2 != 0.3, its not actually a problem as that's properly implemented IEEE-754 behavior. If you wish to avoid this you want to look into arbitrary precision arithmetic
1. Is this because it is using good garbage collector algorithms that clear the buffer?
No. This has nothing to do with garbage collection.
2. Are they just rounding the output?
Any language that is showing fewer digits than 0.30000000000000004 is either rounding or truncating the value it prints.
3. Are they using good floating point algorithms?
Any language that is showing fewer digits than 0.30000000000000004 is losing information in the output. Whether losing information subtly is good or not is up to you.
4. Why are only some programming languages showing this problem?
Different languages made different decisions on how they render floating point values.
5. Is this problem due to the design of the programming language?
No, this behavior comes from a correct implementation of IEEE-754.
6. How do I solve this problem in the Rust language?
I assume you mean "How can I have Rust print 0.3 in this situation?" If that's the case, you'll want to either look into arbitrary precision math or round your values to some number of significant digits. See this example.
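A rough sketch of the rounding approach (independent of the linked example):

```rust
fn main() {
    let x = 0.1 + 0.2;
    // Round only when formatting ...
    println!("{:.1}", x); // prints 0.3
    // ... or round the value itself to one decimal place (still an f64 underneath).
    let rounded = (x * 10.0).round() / 10.0;
    println!("{}", rounded); // prints 0.3
}
```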