Honestly, I think that the C++ convention is silly. AFAICT, it’s predicated on two things:
- C++'s disastrous modularization story (e.g. `#include` order affecting compilation), which means the standard has a legitimate need to reserve the best suffix real estate for itself.
- The idea that the leading underscore lets readers determine whether a literal is standard or not. In Rust, standard literals are all names of primitive types, so at the point that you are using custom literals, this is not a readability win, just Hungarian noise. It's considered good style (AFAIK) to write `0xmylonglonglonghex_u64` anyways.
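For concreteness, a quick sketch of that style point on today's Rust (the value is arbitrary, chosen only to illustrate):

```rust
fn main() {
    // Setting the suffix off with an underscore keeps long literals
    // legible; both forms denote the same u64 value.
    let styled = 0xDEAD_BEEF_u64;
    let murky = 0xDEADBEEFu64;
    assert_eq!(styled, murky);
}
```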
Rust should treat integer conversion literals and custom literals the same. If we went with my trait-and-opaque-type proposal, we’d add e.g. the following to one of the numeric modules in core:
```rust
impl IntLit for i32 {
    type Output = i32;
    const fn int_lit(x: i__) -> i32 {
        x as i32 // explicit cast, though technically
                 // unnecessary according to my proposal
                 // for i__ and friends
    }
}
```
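A self-contained sketch of how this could be modeled on stable Rust today, with two stand-ins: the opaque literal type `i__` does not exist, so `i128` plays its role here, and `const fn` is not yet allowed in traits, so a plain `fn` is used:

```rust
// Hypothetical model of the IntLit trait from the proposal above.
// Assumptions: i128 stands in for the opaque literal type `i__`,
// and `const` is dropped since `const fn` in traits is not stable.
trait IntLit {
    type Output;
    fn int_lit(x: i128) -> Self::Output;
}

impl IntLit for i32 {
    type Output = i32;
    fn int_lit(x: i128) -> i32 {
        x as i32 // explicit cast from the stand-in literal type
    }
}

fn main() {
    // Under the proposal, a literal like `42i32` would desugar
    // to roughly this call:
    let n = <i32 as IntLit>::int_lit(42);
    assert_eq!(n, 42);
}
```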
(Of course, this exists for much the same reason that Add and such are implemented for i32 and friends. It’s only a formality to make the trait system happy, since i32 comes with + as a builtin and not by virtue of any trait.)
Since the primitive types are already introduced into all scopes (not as part of the core prelude), it is natural to expect that we could shadow their literals with our own (though I think clippy should throw a fit over this), just as we can shadow the primitive types themselves. C++'s distinction is, after all, at the grammar level: you're allowed to write e.g. `operator "" k`, which will only get you a warning about how this function can't be called.
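The shadowing half of this already works for type names on stable Rust; a minimal demonstration (the unit struct named `i32` is my own, chosen only to show the shadowing):

```rust
// Primitive type names are not keywords, so a user type can shadow
// them in a scope. (clippy would rightly complain about this.)
#[allow(non_camel_case_types)]
struct i32; // shadows the builtin i32 in this module

fn main() {
    // `i32` now names the zero-sized unit struct:
    assert_eq!(std::mem::size_of::<i32>(), 0);
    // The primitive is still reachable by its canonical path:
    let n: std::primitive::i32 = 0;
    assert_eq!(n, 0);
}
```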
Not only does it put that syntax on equal footing with all other literals, as opposed to enshrining primitive types as more special than they practically need to be, but it lets you do hilarious things like pretend you’re writing pre-1.0 Rust:
```rust
type uint = usize;
let k = 0uint;
```