If the default includes any auto-trait, the behavior of RPIT-in-traits should probably be “leak parameter auto-traits”. As an example, consider the minimal async echo function, both as a free function and in a trait:
```rust
fn echo<T>(t: T) -> impl Future<Output = T> { async { t } }

trait Echo<T> {
    fn echo(t: T) -> impl Future<Output = T>;
}

impl<T> Echo<T> for () {
    fn echo(t: T) -> impl Future<Output = T> { async { t } }
}
```
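As a concrete sketch of today’s free-function leakage (the `require_send` probe and the single-poll `poll_once` helper are illustrative scaffolding I’m adding here, not standard API):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Today's free-function RPIT behavior: the hidden type leaks its auto
// traits, so this future is Send exactly when `T` is Send, because the
// async block captures only `t`.
fn echo<T>(t: T) -> impl Future<Output = T> {
    async { t }
}

// Compile-time probe: the call only type-checks if `F` is Send.
fn require_send<F: Send>(f: F) -> F {
    f
}

// Minimal single-poll "executor" for futures that are immediately ready.
fn poll_once<F: Future>(fut: F) -> Option<F::Output> {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let fut = pin!(fut);
    match fut.poll(&mut cx) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}

fn main() {
    // u32 is Send, so the leaked auto trait makes the future Send too.
    let fut = require_send(echo(42u32));
    assert_eq!(poll_once(fut), Some(42));

    // A non-Send parameter makes the future non-Send; the next line would
    // fail to compile under leakage and under the parameter-based rule:
    // require_send(echo(std::rc::Rc::new(1)));
    println!("ok");
}
```

Note that for `echo`, leakage and the proposed parameter-based rule happen to agree, since the async block holds exactly the parameters; they diverge only when the body drops a parameter before the future is created.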
If it weren’t for my firm belief that RPIT and async fn should have the same auto-trait inference behavior, I’d potentially argue for this “parameter auto-trait leakage” to be the behavior of all async fn. (That is, if and only if a struct holding all the parameters would fulfill the auto trait, the auto trait is required and exposed for the return value.) This makes the inference local to the function header. I’m almost willing to argue that non-trait RPIT should behave that way too, with impl Trait + ?AutoTrait to opt out, even though that would have to wait until the 2021 edition at the earliest.
Actually, the “utility” of auto-trait leakage is conditional auto-trait implementation, which must depend on the arguments in some fashion. Given all-parameter auto-trait inference, the only time you’d have to fall back to manual newtypes (which can have arbitrarily complex trait impl predicates) for just auto-traits is when the auto-trait implementation depends on only a subset of the parameters and the remaining parameters might not implement the auto-trait.
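A sketch of that newtype fallback: suppose the future stores a raw scratch pointer (say, an FFI handle), so the compiler would infer !Send, while we know Send-ness should depend on T alone. The `EchoFut`/`echo_newtype` names and the pointer field are hypothetical, chosen just to make the predicate visible:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Illustrative newtype: the raw pointer (imagine an FFI scratch handle)
// makes the type !Send by default.
struct EchoFut<T> {
    value: Option<T>,
    _scratch: *mut u8,
}

// The newtype can carry an arbitrarily precise predicate: here, Send
// depends on `T` alone. (The SAFETY claim is asserted for the example.)
unsafe impl<T: Send> Send for EchoFut<T> {}

impl<T: Unpin> Future for EchoFut<T> {
    type Output = T;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<T> {
        Poll::Ready(self.get_mut().value.take().expect("polled after completion"))
    }
}

fn echo_newtype<T>(t: T) -> EchoFut<T> {
    EchoFut { value: Some(t), _scratch: std::ptr::null_mut() }
}

// Compile-time probe: the call only type-checks if `F` is Send.
fn require_send<F: Send>(f: F) -> F {
    f
}

fn main() {
    // Send-ness follows the manual predicate, not the inferred field types:
    let mut fut = require_send(echo_newtype(5u32));
    // Take the value by hand rather than spinning up an executor.
    assert_eq!(fut.value.take(), Some(5));
    println!("ok");
}
```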
I already had a gut feeling that auto-trait leakage through impl Trait (a feature introduced to easily restrict the API promised to the caller) was a misfeature. I’ve now convinced myself that the above auto-trait inference mirrors how we treat lifetime inference (in async fn more so than in free fn, as we capture all lifetimes rather than only one when it is unique) and would be more desirable if we could redo things.
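The lifetime parallel, sketched: an async fn captures every input lifetime, so its desugaring has to name the capture explicitly (before the 2024 edition’s capture-all default, dropping the `+ 'a` below would be an error because the hidden type borrows `s`; `head` and the `poll_once` helper are illustrative):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// async fn: the returned future implicitly captures the lifetime of `s`.
async fn head(s: &[u8]) -> u8 {
    s[0]
}

// The desugaring must spell the capture out; this is the "capture all"
// behavior contrasted with free-fn lifetime elision above.
fn head_desugared<'a>(s: &'a [u8]) -> impl Future<Output = u8> + 'a {
    async move { s[0] }
}

// Minimal single-poll "executor" for futures that are immediately ready.
fn poll_once<F: Future>(fut: F) -> Option<F::Output> {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let fut = pin!(fut);
    match fut.poll(&mut cx) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}

fn main() {
    let buf = [7u8, 8, 9];
    assert_eq!(poll_once(head(&buf)), Some(7));
    assert_eq!(poll_once(head_desugared(&buf)), Some(7));
    println!("ok");
}
```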
I firmly believe that async fn and its impl Trait desugaring (fn -> impl Future + 'all) should behave the same in terms of auto-trait inference. If we decide not to hold the position that they should be the same in all positions (leak in non-trait, no bound in trait), we can potentially make async fn special by enforcing this everywhere, or make this apply only to RPIT/async fn in traits. If we decide to apply it to traits, however, we should probably provide a migration path for non-trait usages: warn quickly for cases that don’t meet the weaker inference, and upgrade to an error with the next edition.
I have a hard time pushing for any of those paths, however. Auto-trait “inference-from-parameters” feels better than “leakage-from-implementation”, but not enough better to warrant edition breakage. If we did want to push the migration, though, sooner would be better.
(As a side note, even a minimal impl Debug for async fn futures would probably be super valuable.)