I honestly get the appeal of doing this automatically, but there's still the intent issue, which is also why we don't allow global type inference for private functions.
Or more specifically: it's currently possible to type check a function using only the body of that function, the signatures of all called functions, and the definitions of all used structures, and no more. The key point is that type checking a function never requires knowing the body of any other function.
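As a small sketch of that property (the names here are made up for illustration): the body of the caller below can be checked against nothing but the helper's signature and the struct definition.

```rust
struct Point {
    x: i32,
    y: i32,
}

// To type check `magnitude_squared`, the compiler needs only the
// *signature* of `square` and the definition of `Point`. The body of
// `square` could change arbitrarily (as long as the signature holds)
// without re-checking its callers.
fn square(n: i32) -> i32 {
    n * n
}

fn magnitude_squared(p: &Point) -> i32 {
    square(p.x) + square(p.y)
}

fn main() {
    let p = Point { x: 3, y: 4 };
    println!("{}", magnitude_squared(&p)); // prints 25
}
```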
Or, for an even more direct comparison, it's why we don't allow lifetime inference from function bodies even for non-public functions, despite the compiler necessarily having the minimum-bound information already.
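To illustrate: even for a private helper, elided lifetimes are expanded by fixed, signature-only rules, not by whatever minimal bounds the body would happen to permit. (The function here is a hypothetical example.)

```rust
// Elision rewrites this signature to
//     fn first_word<'a>(s: &'a str) -> &'a str
// purely from the signature's shape. The compiler does *not* peek at
// the body to infer a looser or tighter lifetime, even though this
// function is private and it could in principle do so.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    println!("{}", first_word("hello world")); // prints "hello"
}
```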
Global type inference (of which subview inference is a subset) runs into the problem that you have to hold the entire crate in (human) memory to understand it. You can't page out the details of the implementation of some helper module, because that implementation has a direct impact on how you're allowed to use it.
Then there are, of course, still semver-like hazards in any large codebase: if I have a subroutine and add a log of the object's state to it, I've broken any callers that were relying on me not touching certain other fields. If I make a temporary change that stops touching a field and then revert it later (the change turned out to be a bad idea), anyone in the meantime could have started relying on me not touching that field. (Sure, maybe that should've been a crate boundary. Oftentimes it's not, and sometimes it can't be, due to bidirectional dependencies.)
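A sketch of the hazard, with hypothetical names: under today's rules, the signature below promises nothing about which fields the method touches, so adding a log line later is a compatible change. Under body-based subview inference, callers could have been holding borrows of exactly the fields the body happened not to use.

```rust
struct Robot {
    pos: f32,
    log: Vec<String>,
}

impl Robot {
    // Today `&mut self` conservatively claims the whole struct, so
    // uncommenting the log line below changes nothing for callers.
    // Under subview inference, the inferred "view" would silently
    // widen from {pos} to {pos, log}, breaking any caller that was
    // simultaneously borrowing `log`.
    fn advance(&mut self, dx: f32) {
        self.pos += dx;
        // self.log.push(format!("moved by {dx}"));
    }
}

fn main() {
    let mut r = Robot { pos: 0.0, log: Vec::new() };
    r.advance(1.5);
    println!("pos = {}, {} log entries", r.pos, r.log.len());
}
```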
The important part of function signatures being fully specified is developer intent. Signatures give the developer a place to tell the compiler what they want the code's shape to be, which the compiler then uses to help the developer make sure that 1) the implementation actually fits inside said shape (fulfils its obligations), and 2) calling code relies only on the promised shape. Having this layer of developer intent allows for better errors (in the common case where most signatures are mostly accurate to intent), since the compiler knows both what the developer wanted and what the developer wrote, and can offer hints in the direction of either stronger guarantees or fixing what was written. (Personally, I like avoiding errors that go "because it uses ... that uses ... that uses ... that uses ... that uses ... that uses this field". We have some (autotraits, blanket impl chains), but each exists for good reason.)
It's interesting that global type inference is kind of halfway between static typing and dynamic typing. Personally, I think Rust's local type inference is the sweet spot.
I'm against global type inference for anything but diagnostics, for the reasons above. Diagnostic use is fine and great (e.g. with _ as a return type, rustc will tell you what type you're returning so you can put it there, though that's not actually global inference), because it's the "did you mean" that makes the compiler a great peer programmer. The compiler could track this inference and say "hey, if you made these borrows take disjoint views, it'd compile fine," and I'd rejoice at the compiler getting more helpful. It's the implicit reliance on function bodies for signature clarification that I'm against.
The reason type/borrow/disjoint-capture inference works for closures is that they're entirely local. Their inference is still bounded by the enclosing function's definition: the implementation of the closure is clearly part of the implementation of the function it belongs to.
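A quick sketch of that locality (names are illustrative): the closure's parameter type, its mutable borrow of the vector, and its FnMut-ness are all inferred from its body, but that inference is contained entirely within the enclosing function, whose own signature stays fully explicit.

```rust
// The signature of `bump_twice` is fully specified; nothing a caller
// sees depends on inference. Inside, the closure `bump` gets its
// capture mode (unique borrow of `*scores`) and FnMut-ness inferred
// from its body -- and that inference never leaks past this function.
fn bump_twice(scores: &mut Vec<i32>) {
    let mut bump = |i: usize| scores[i] += 10;
    bump(0);
    bump(2);
}

fn main() {
    let mut scores = vec![1, 2, 3];
    bump_twice(&mut scores);
    println!("{:?}", scores); // prints [11, 2, 13]
}
```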
I'd love to see it become easier to lift such closures into full functions. I just don't think expanding closure inference to function signatures that are currently fully specified is a good idea, at least for successful compiles; I'm all for "allowing" inference in unsuccessful compiles in order to suggest adding the inferred signature.