I believe this is the correct default. It avoids overflow errors, but still lets people declare i32 types explicitly when they need them.
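To make the trade-off concrete, here's a small Rust sketch (using the num-bigint crate as a stand-in for a language's built-in arbitrary-precision integers): computing 20! with checked i32 arithmetic has to bail out with None, while the bigint version just gives the exact answer.

```rust
use num_bigint::BigInt; // stand-in for a built-in bigint type

fn main() {
    // 20! does not fit in an i32; checked arithmetic at least refuses to lie about it.
    let fixed: Option<i32> = (1..=20i32).try_fold(1i32, |acc, x| acc.checked_mul(x));
    println!("{:?}", fixed); // None

    // With arbitrary-precision integers the same arithmetic just works.
    let exact = (1..=20u32).fold(BigInt::from(1), |acc, x| acc * BigInt::from(x));
    println!("{}", exact); // 2432902008176640000
}
```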
The default type for division results should be rational (that is, an exact ratio of two bigints). That way you can represent invalid values like 1000/0 as error values without panicking. When printed, they could show up as “not a number”, or whatever you want to do with them.
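Here is a minimal sketch of what such a result type could look like. The Rational enum and its div constructor are hypothetical names just for illustration, with num-bigint again standing in for built-in bigints:

```rust
use num_bigint::BigInt;
use std::fmt;

/// Result of a division: either an exact ratio of two bigints, or an error value.
enum Rational {
    Ratio { numer: BigInt, denom: BigInt },
    NotANumber,
}

impl Rational {
    /// Division never panics: a zero denominator becomes an error value instead.
    fn div(numer: BigInt, denom: BigInt) -> Rational {
        if denom == BigInt::from(0) {
            Rational::NotANumber
        } else {
            Rational::Ratio { numer, denom }
        }
    }
}

impl fmt::Display for Rational {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Rational::Ratio { numer, denom } => write!(f, "{}/{}", numer, denom),
            Rational::NotANumber => write!(f, "not a number"),
        }
    }
}

fn main() {
    println!("{}", Rational::div(BigInt::from(1000), BigInt::from(0))); // not a number
    println!("{}", Rational::div(BigInt::from(1), BigInt::from(3)));    // 1/3
}
```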
That means you can do any simple arithmetic without overflowing or getting weird results, which would prevent a lot of logic errors for people writing application code. If you want to write fast code with explicitly sized types, you're welcome to do that. If you need square roots, you can take the result as an f64 or whatever type you want.
But the default fallback type should really be foolproof. You don't want space shuttles crashing because of integer overflows. It costs you almost nothing when the numbers are small, and you get correct results when they are big. Seems like a win-win. And if you need wrap-around overflow for hashing, again, you have your i32/i64 types.
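For the hashing case specifically, wrap-around on a fixed-size type is exactly what you want, and opting into it explicitly reads fine. A standard FNV-1a hash, for example, depends on 64-bit multiplication wrapping:

```rust
/// 64-bit FNV-1a: the wrap-around in the multiply is deliberate.
fn fnv1a_64(data: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf29ce484222325; // FNV offset basis
    for &byte in data {
        hash ^= byte as u64;
        hash = hash.wrapping_mul(0x100000001b3); // FNV prime
    }
    hash
}

fn main() {
    println!("{:016x}", fnv1a_64(b"hello"));
}
```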
What do you guys think?