Without auto-ref we could still have &mut vec <- x, or plain vec <- x if vec is actually a &mut Vec (not at all rare). But yes, auto-ref is a big win. I don’t see what can of worms it opens, other than the current implementation (HIR expansion) being unsuitable — but at least for box, the implementation may already change for other reasons (type inference) as outlined in the OP.
And it’s not just, or even primarily, about the number of characters typed. In my and other people’s vision, <- would be the default, the simplest and most aesthetic way of inserting into a collection: the <- syntax is visually evocative, it’s the same across different collections (Vec::push vs. HashSet::insert), and it has less visual noise than the alternatives (no parentheses, no || prefix).
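To make the contrast concrete, here is the status quo the post is arguing against: each collection spells the common “insert an element” operation differently (a minimal sketch; the `<-` lines in the comments show the proposed uniform syntax, which is not valid stable Rust):

```rust
use std::collections::HashSet;

fn main() {
    let mut vec = Vec::new();
    let mut set = HashSet::new();

    vec.push(1);   // Vec spells it `push`;    proposed: vec <- 1;
    set.insert(1); // HashSet spells `insert`; proposed: set <- 1;

    assert_eq!(vec, [1]);
    assert!(set.contains(&1));
}
```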
While reading this, it occurred to me that the closure form actually isn’t equivalent in the number of moves! In the following example (after whatever temporaries and moves are involved in try! or ?), the payload of the Ok variant is moved into the local arg, then moved into the closure object, and then finally moved into the Vec's backing buffer.
```rust
let arg = try!(compute_argument());
vec.emplace(move || arg); // using your hypothetical Vec::emplace
```
Contrast this with the <- equivalent (vec <- try!(compute_argument());), which does indeed need a temporary for the Result just like the above code, but in the Ok case directly (again, after whatever happens in the try!) copies the payload into the Vec's memory. And despite the closure being quite inlinable, the move from the Result temporary into the local occurs before the memory allocation, so we can’t be sure it gets reordered. (Not to mention that the closure isn’t 100% guaranteed to be inlined, that LLVM has been and still is less-than-stellar at optimizing memcpys, and a million other minor wrinkles.)
So there’s that. But even if it were fully equivalent, I don’t think we’d want the error that the closure implies. That would assume people only use placement (in whatever form) when they absolutely, positively need the optimization and cannot tolerate anything less. This is antithetical to the vision of placement becoming the default syntax: it doesn’t have to be always faster (the ? case needs a temporary regardless of how you approach it), it just has to be always at least as fast as the alternatives (Vec::push and your hypothetical Vec::emplace).
Besides, Rust makes a point of being relatively explicit about costs, if you know what to look for, but it doesn’t go out of its way to actively penalize slight inefficiencies. If someone does not know, or does not care, that a ? expression involves an extra temporary, forcing them to move the ? out of the closure doesn’t help them; it just forces them to write uglier code and prolongs the “fight the compiler” phase. Furthermore, even if we wanted to highlight these situations, a lint could do the job just as well, and it could be silenced.
Neither &move nor “out-pointers” are really ergonomic without being parametrized on the allocation and owning it.
IOW, the Place types in the RFC are one of the few realistic versions of out-pointers for Rust (you could also imagine having a single type with a generic parameter, but the two are mostly equivalent), there just is no Place type provided for stack slots, only everything else.
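For readers who haven’t seen the RFC, here is a simplified, hypothetical rendition of the Placer/Place protocol it describes (the trait and method names loosely follow the RFC, but this cut-down version and the `VecBackPlace` type are mine, not the real std API). The key property is that the allocation is reserved *before* the value is evaluated, and the value is written directly into the reserved slot:

```rust
use std::ptr;

// A place that can receive a value of type T.
trait Placer<T> {
    type Place: InPlace<T>;
    fn make_place(self) -> Self::Place;
}

trait InPlace<T> {
    type Owner;
    fn pointer(&mut self) -> *mut T;
    // Safety: the slot must have been initialized via `pointer`.
    unsafe fn finalize(self) -> Self::Owner;
}

// A Place for pushing onto the back of a Vec.
struct VecBackPlace<'a, T> {
    vec: &'a mut Vec<T>,
}

impl<'a, T> Placer<T> for &'a mut Vec<T> {
    type Place = VecBackPlace<'a, T>;
    fn make_place(self) -> VecBackPlace<'a, T> {
        self.reserve(1); // allocate before the value is evaluated
        VecBackPlace { vec: self }
    }
}

impl<'a, T> InPlace<T> for VecBackPlace<'a, T> {
    type Owner = ();
    fn pointer(&mut self) -> *mut T {
        unsafe { self.vec.as_mut_ptr().add(self.vec.len()) }
    }
    unsafe fn finalize(self) {
        let len = self.vec.len();
        self.vec.set_len(len + 1); // commit the now-initialized slot
    }
}

// Roughly what `v <- 3` would desugar to:
fn main() {
    let mut v = vec![1, 2];
    let mut place = (&mut v).make_place();
    unsafe {
        ptr::write(place.pointer(), 3); // write straight into the buffer
        place.finalize();
    }
    assert_eq!(v, [1, 2, 3]);
}
```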
That makes p <- x what would otherwise be *p = x.
I really don’t like vec <- x because it seems confusing and ambiguous to me. For example, with a vec you usually want to push to the back, but with a deque you can efficiently append at either end. And it’s not like you can’t insert at the beginning of a vec either; it’s just slower than inserting at the back. So it’s not immediately clear what vec <- x even means.
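The ambiguity can be seen with today’s method-based APIs, where each insertion point is named explicitly (a minimal sketch):

```rust
use std::collections::VecDeque;

fn main() {
    let mut v = vec![2, 3];
    v.push(4);      // O(1) amortized, at the back
    v.insert(0, 1); // O(n), at the front: possible, just slower
    assert_eq!(v, [1, 2, 3, 4]);

    let mut d = VecDeque::new();
    d.push_back(2);  // O(1) amortized
    d.push_front(1); // also O(1) amortized: two equally cheap ends
    assert_eq!(d.iter().copied().collect::<Vec<_>>(), vec![1, 2]);
    // Which of push_back/push_front should `d <- x` mean?
}
```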
Apart from that, Go uses <- for an unrelated operation (sending and receiving on a channel), so that would increase confusion as well.
Are there any updates on this? Is box syntax dependent on the rest of the functionality here, or would it be possible to stabilise box as it works in nightly before having the rest of the details sorted?
One problem with stabilizing box as it works in nightly: It might prematurely stabilize a strong connection between box <expr> and Box<T> (for an expr: T), which in turn might make it difficult to generalize box in the future to other container types in a backward-compatible fashion.
Basically, we’ve been holding off stabilizing box until we make more progress on generalizing box (or, alternatively, deciding that generalizing box is not worth the effort/cost…)
Would it be an idea to special-case Box::new(x) to simply act the same as the current unstable box x while this is not stabilised? That’s what the Box::new function contains anyhow, but it doesn’t seem to always inline properly. I don’t think there is currently any way on stable to guarantee heap allocation without temporaries, short of delving into unsafe code, especially at lower optimisation levels.
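For illustration, here is roughly what the unsafe workaround looks like (a hedged sketch; `box_in_place` is a made-up helper, and even this only *helps* the optimizer: in unoptimized builds the closure’s return value may still materialize on the stack before the write). It allocates the heap slot first and then writes the value into it, assuming T is not zero-sized:

```rust
use std::alloc::{alloc, handle_alloc_error, Layout};

// Hypothetical helper: allocate first, then write the value into the
// heap slot, instead of constructing it and handing it to Box::new.
unsafe fn box_in_place<T>(f: impl FnOnce() -> T) -> Box<T> {
    let layout = Layout::new::<T>(); // assumes size > 0
    let ptr = alloc(layout) as *mut T;
    if ptr.is_null() {
        handle_alloc_error(layout);
    }
    ptr.write(f()); // no drop of old contents; raw memory is uninitialized
    Box::from_raw(ptr)
}

fn main() {
    let b = unsafe { box_in_place(|| [0u8; 16]) };
    assert_eq!(b.len(), 16);
}
```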
Consider the MIR output of this code (current rustc doesn’t really do any optimisation at the MIR level at the moment except cleaning up &*, so the MIR is similar across optimisation levels):
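The original snippet did not come through here; as a hypothetical reconstruction (the function name `via_new` is mine), a pair of this shape exhibits the difference being described, a large array boxed via Box::new on stable versus the unstable box expression on nightly:

```rust
fn via_new() -> Box<[u8; 4096]> {
    // In the MIR for this version, the array is built in a stack
    // temporary and passed to Box::new by value.
    Box::new([0u8; 4096])
}

// Nightly-only equivalent, shown for contrast (won't compile on stable):
// fn via_box() -> Box<[u8; 4096]> {
//     box [0u8; 4096] // allocates first, then fills the box in place
// }

fn main() {
    assert_eq!(via_new().len(), 4096);
}
```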
My suggestion is that the compiler could emit MIR similar to that emitted by box x when it encounters Box::new(x), rather than emitting the normal function-call MIR (i.e. treat it as if it had encountered box x). This lets LLVM optimise the function much better, as you can see in the assembly output of the same functions. If you look at the MIR for the Box::new version, it creates a temporary array on the stack (and possibly another one inside the function call). I don’t know the compiler internals well enough to say exactly how this could be implemented.
Another temporary alternative might be a boxed! macro similar to vec!.
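Such a macro could look like this (a sketch; on stable it can only forward to Box::new, so it gains nothing today, but the point is that its body could later be swapped for box $e, or whatever placement form stabilises, without changing callers):

```rust
// Hypothetical macro: a stable-compatible placeholder whose expansion
// could later become a guaranteed-placement form.
macro_rules! boxed {
    ($e:expr) => {
        Box::new($e)
    };
}

fn main() {
    let b = boxed!(41 + 1);
    assert_eq!(*b, 42);
}
```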