I think your summary was roughly accurate, yes. And yes, we have to do a trait lookup no matter what – but the way that fallback works is that we first resolve all outstanding trait references as best we can, then we do fallback, then repeat the process. I feel uncomfortable about fallback and type inference as it is; injecting further trait lookups into the process seems to me to be over the top. Imagine, for example, that the types involve other variables that are themselves still being inferred:
fn foo<T=u32,U=SomeType<T>>(...)
Now suppose that neither T nor U is constrained, so we are weighing the fallbacks u32 (for $T) and SomeType<$T> (for $U), where $T is the type variable we created to represent T. I guess we can apply our usual procedure of testing whether a trait match could possibly succeed to decide whether SomeType<$T> might be an integral type, but you can see the confusion.
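As a reminder of how that resolve-then-fallback loop behaves today, here is a snippet that compiles on current Rust (show is just a throwaway name of mine):

use std::fmt::Debug;

fn show<T: Debug>(t: T) {
    println!("{:?}", t);
}

fn main() {
    // `22` starts out as an unconstrained integral variable. The
    // obligation that it implement `Debug` cannot select a unique impl
    // while the variable is unresolved, so the compiler resolves what
    // it can, applies the integral fallback (i32), and re-runs
    // selection, which then succeeds. The program prints `22`.
    show(22);
}

The proposed user-specified fallbacks would be weighed by this same loop, except that the candidate types could now mention other still-unresolved variables, as in the signature above.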
Of course similar situations can arise even today:
fn foo<T=u64,U=T>(x: &[T], y: U) { }
fn main() { foo(&[], 44); }
Here there are four type variables; let’s call them $T, $U, $elem (the element type of the empty slice), and $44 (the type of the literal 44). $T is unified with $elem, and $U is unified with $44. In order to detect ambiguous cases, we do not apply fallbacks in any particular order but rather consider all of them simultaneously. So we may have to examine the fallback for $U, and we would find that it is $T, which is as yet unresolved. We don’t yet know what $T will become, so it’s tricky to know whether to apply the fallback. (In this case, since $T will ultimately fall back to u64, we probably should, but it could just as well have been char.) I don’t think this is intractable, though: we can presumably trace $T to its fallback(s) (or, if it is unified, to the type it is unified with) and try to reach a decision that way. Just complicated.
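For what it’s worth, here is a toy model of that tracing procedure, just to argue that it is implementable. The names and representation are mine, not rustc’s: each inference variable is either unified with another variable, already resolved, or unresolved with (possibly) a declared fallback, which may itself name another variable.

use std::collections::HashMap;

#[allow(dead_code)]
enum State {
    UnifiedWith(usize),     // unified with another variable
    Resolved(&'static str), // already a concrete type
    Fallback(Target),       // unresolved, but has a declared fallback
    Unconstrained,          // unresolved, no fallback at all
}

enum Target {
    Ty(&'static str), // fallback is a concrete type, e.g. u64
    Var(usize),       // fallback names another variable, e.g. T
}

// Chase a variable through unification links and variable-valued
// fallbacks until we bottom out in a concrete type (or give up).
fn trace(mut v: usize, table: &HashMap<usize, State>) -> Option<&'static str> {
    loop {
        match table.get(&v)? {
            State::UnifiedWith(w) => v = *w,
            State::Resolved(ty) => return Some(*ty),
            State::Fallback(Target::Ty(ty)) => return Some(*ty),
            State::Fallback(Target::Var(w)) => v = *w,
            State::Unconstrained => return None,
        }
    }
}

fn main() {
    // The example above: 0 = $T, 1 = $U, 2 = $elem, 3 = $44.
    let table: HashMap<usize, State> = [
        (0, State::Fallback(Target::Ty("u64"))), // $T falls back to u64
        (1, State::Fallback(Target::Var(0))),    // $U's fallback is T
        (2, State::UnifiedWith(0)),              // $elem unified with $T
        (3, State::UnifiedWith(1)),              // $44 unified with $U
    ]
    .into_iter()
    .collect();

    // Deciding whether applying $U's fallback to $44 makes sense means
    // first tracing what $T will become:
    println!("$44 traces to {:?}", trace(3, &table));
    assert_eq!(trace(3, &table), Some("u64")); // u64 is integral, so yes
}

A real implementation would also have to handle cycles among fallbacks and variables with multiple competing fallbacks, which this toy happily ignores.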