I can see why - to be honest, I think this is more a weakness of the syntax of lifetime parameters, because you can’t see that anything has been elided. I find `fn foo(&self) -> &str`, for example, much less opaque, because I can see where the missing lifetime would go. (The fact that it shares the sigil with `self` also helps.)
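For comparison, here’s a minimal sketch of that signature with the elided lifetime written out in full; the struct and method bodies are just placeholders so it compiles:

```rust
struct S(String);

impl S {
    // Elided form: the output borrow is silently tied to `&self`.
    fn foo(&self) -> &str {
        &self.0
    }

    // The same signature with the lifetime spelled out explicitly.
    fn foo_explicit<'a>(&'a self) -> &'a str {
        &self.0
    }
}
```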
But this is less likely to be an issue with this elision, I feel, because it doesn’t bind the elided lifetime to any lifetime in particular.
The use case in which I’m seeing this pop up a lot is a trait defined like this:
```rust
trait Foo<T: Bar> {
    fn foo(&mut self, bar: &T);
}

trait Bar { /* ... */ }
```
My library provides implementations of `Bar`; you provide implementations of `Foo`, which you pass to some function, and then I manipulate your `Foo` with a `Bar`.
Several of my `Bar` implementations have lifetime parameters (some more than one), so I end up with functions like:
```rust
fn baz<T>(foo: T) where T: for<'a, 'b> Foo<BarImpl<'a, 'b>>
```
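To make that shape concrete, here’s a self-contained sketch of the whole setup (it repeats the trait definitions from above; `BarImpl`’s fields, `MyFoo`, and the bodies are invented placeholders, not my real API):

```rust
trait Bar {}

trait Foo<T: Bar> {
    fn foo(&mut self, bar: &T);
}

// A library-provided `Bar` implementation that borrows its data,
// hence the two lifetime parameters.
struct BarImpl<'a, 'b> {
    name: &'a str,
    data: &'b [u8],
}

impl<'a, 'b> Bar for BarImpl<'a, 'b> {}

// The library constructs the `BarImpl` internally, with lifetimes the
// caller never sees, so the bound has to be higher-ranked over 'a and 'b.
fn baz<T>(mut foo: T)
where
    T: for<'a, 'b> Foo<BarImpl<'a, 'b>>,
{
    let name = String::from("example");
    let data = vec![1, 2, 3];
    let bar = BarImpl { name: &name, data: &data };
    foo.foo(&bar);
}

// The caller’s side: an implementation generic over both lifetimes
// satisfies the higher-ranked bound.
struct MyFoo;

impl<'a, 'b> Foo<BarImpl<'a, 'b>> for MyFoo {
    fn foo(&mut self, bar: &BarImpl<'a, 'b>) {
        println!("{}: {:?}", bar.name, bar.data);
    }
}

fn main() {
    baz(MyFoo);
}
```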
(The real signatures have even more elements.) The reason I explain this is that the trait itself isn’t parameterized by a lifetime (I agree that this is very rare), but that it’s parameterized by a type carrying a lifetime.
Of course, what I’d really like here is better `impl Trait` support, so I could instead define it as:
```rust
fn baz<T>(foo: T) where T: Foo<impl Bar>
```