While it is certainly possible to implement implicit widening without changing the meaning of existing programs (by having widening kick in only in situations that are presently errors), I worry that it may make it easier to accidentally affect the correctness of code. Consider the following (admittedly contrived) example:
```rust
fn func1(_: i32) {}
fn func2(_: i16) {}

let input = "4500";
let mut n = input.parse().unwrap();
// A
func1(n);
n *= 16;
println!("{}", n);
```
In this example, `n` is inferred to be `i32`, because that's what `func1` expects, and the program outputs 72000. Now, what happens if one adds `func2(n)` at point A without remembering that it takes a narrower type? Today, this causes `n` to be inferred as an `i16`, and results in the following error:
```
error: mismatched types: expected `i32`, found `i16` (expected i32, found i16)
```
However, with implicit widening, this wouldn't be a compile-time error. Instead, the code would happily build with `n` as an `i16`, implicitly widening the value when calling `func1`. Some formerly-accepted inputs would then result in a panic (with overflow checks enabled), while others would silently overflow. In this case, without overflow checking, the code would happily and erroneously output 6464.
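A quick sketch confirming the arithmetic above, using `wrapping_mul` to model release-mode (overflow-checks-off) semantics for the narrowed case:

```rust
fn main() {
    // What the program computes today, with n inferred as i32:
    let wide: i32 = 4500 * 16;

    // What it would compute with n inferred as i16 and overflow
    // checks disabled: 72000 wraps modulo 2^16 down to 6464.
    let narrow: i16 = 4500i16.wrapping_mul(16);

    println!("{} {}", wide, narrow);
    assert_eq!(wide, 72_000);
    assert_eq!(narrow, 6_464);
}
```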
It may very well be possible to address the various pitfalls with good rules for when widening can and can't kick in, how it can affect inference, and whether it should propagate to the arguments of a widened expression, but I wouldn't want it included without a lot of thought and testing.
Given that things like polymorphic indexing and heterogeneous comparisons carry far less risk, are ergonomic wins, and remain useful even in a world with implicit widening, I still feel we should start there. And since implicit widening wouldn't affect the meaning of existing programs, it can always be added at a later date if it really turns out to be necessary.
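For completeness, a sketch of what the status quo already offers: lossless widening is available explicitly through the standard `From`/`Into` conversions, so the contrived example compiles today if the call site spells out the conversion (`func1`/`func2` reused from the example above):

```rust
fn func1(_: i32) {}
fn func2(_: i16) {}

fn main() {
    let n: i16 = "4500".parse().unwrap();
    func2(n);
    // i32 implements From<i16>, so the widening is explicit,
    // infallible, and visible at the call site.
    func1(i32::from(n));
}
```

Keeping the conversion explicit is exactly what makes the narrowing mistake in the example a visible diff rather than a silent inference change.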