I propose the following solution to the i__ input type problem:
// Marker trait for the types a custom literal may take as its raw input.
trait LiteralInput {}

impl LiteralInput for i32 {}
// impl LiteralInput for u32, i64, ..., f32, f64, &'static str, ...
// possibly add big integer literals in some form in the future

trait Literal<T: LiteralInput> {
    type Output;
    /*const*/ fn apply_literal(input: T) -> Self::Output;
}

struct Meter(i32);

// Uninhabited type that serves purely as the literal suffix name.
#[allow(non_camel_case_types)]
enum m {}

impl Literal<i32> for m {
    type Output = Meter;
    /*const*/ fn apply_literal(input: i32) -> Meter {
        Meter(input)
    }
}
With this, a custom literal can accept any of the available input types, and more input types (e.g. i256 or [u8; N]) can be added in the future if needed. If a custom literal requires i32, the compiler will try to interpret the preceding integer literal as an i32 (and fail if it is not a valid i32 literal).
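To make the mechanism concrete, here is a hand-written sketch (reusing the definitions above) of what a use of the suffix might desugar to; the 5m surface syntax itself is hypothetical:

fn main() {
    // `let d = 5m;` would conceptually become:
    let d: Meter = <m as Literal<i32>>::apply_literal(5);
    let Meter(value) = d;
    assert_eq!(value, 5);
}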
It will also be possible to implement the trait for multiple input types, e.g. impl Literal<i32> for m and impl Literal<i64> for m. This is useful when several input types make sense for the same literal (it may even produce a generic Meter<T: Num>); see the sketch below. The compiler will pick a suitable implementation at each invocation. This can work very similarly to how operator traits like Add<Rhs> or Index<Idx> work today.
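As an illustration only (the generic Meter<T> below is my own variation on the sketch above, and it assumes an additional LiteralInput impl for i64), the two implementations could look like this:

impl LiteralInput for i64 {} // in addition to the i32 impl above

// A generic wrapper replacing the non-generic Meter from the first sketch.
struct Meter<T>(T);

#[allow(non_camel_case_types)]
enum m {}

impl Literal<i32> for m {
    type Output = Meter<i32>;
    /*const*/ fn apply_literal(input: i32) -> Meter<i32> {
        Meter(input)
    }
}

impl Literal<i64> for m {
    type Output = Meter<i64>;
    /*const*/ fn apply_literal(input: i64) -> Meter<i64> {
        Meter(input)
    }
}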
If there are multiple implementations and the supplied literal can be interpreted as any of them, it is not obvious which one the compiler should choose. That is not ideal, but it is the same situation we already have with plain integer literals: you may not know which type is selected for vec![1] if the context places no additional constraints. Type inference should do its magic. If the context requires a Meter<i64>, impl Literal<i64> for m will be chosen over impl Literal<i32> for m.
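For comparison, this is how defaulting and inference already behave for built-in integer literals today; a small runnable snippet of the current behavior:

fn main() {
    // No constraint from the context: the literal defaults to i32,
    // so this is a Vec<i32>.
    let _unconstrained = vec![1];

    // The annotation constrains the element type, so the same literal
    // is inferred as i64 here.
    let _constrained: Vec<i64> = vec![1];
}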