if you store a mutable reference (or a mutable raw pointer) in an immutable place, you can still assign to its pointee.
however, because DerefMut::deref_mut takes &mut self, this is not the case for any other pointer-like type: mutating through a custom DerefMut requires the binding itself to be mut.
example of this inconsistency:
```rust
use std::sync::Mutex;

fn main() {
    let mut x = 4;
    let y = Mutex::new(5);
    let mut z = 7;
    let tup = (&mut x, y.lock().unwrap(), &mut z as *mut i32);
    // `x` and `z` can be immutable bindings, but the guard needs `mut`
    let (x, mut y, z) = tup;
    *x = 5;
    *y = 10;
    unsafe { *z = 15; }
}
```
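to isolate the inconsistency: the same assignment through a MutexGuard fails without mut on the binding, while the &mut above never needed it. a minimal sketch:

```rust
use std::sync::Mutex;

fn main() {
    let y = Mutex::new(5);
    let g = y.lock().unwrap(); // no `mut` on the binding
    // *g = 10; // error: cannot borrow `g` as mutable
    drop(g);
    let mut g = y.lock().unwrap(); // `mut` added
    *g = 10; // fine now
    drop(g);
    assert_eq!(*y.lock().unwrap(), 10);
}
```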
This is known behavior, and a key contributor to why mut on bindings is sometimes called "just a lint."
My personal mental reasoning for this behavior is that it's a consequence of the auto(de)ref rules; i.e. that given p: &mut _, using p as a method receiver doesn't use p the place, but more accurately uses the place *p (by &mut), which is marked mut.
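A quick sketch of that receiver rule: the binding below is not mut, yet a &mut self method works, because the receiver place is *p rather than p:

```rust
fn main() {
    let mut v = vec![1];
    let p = &mut v; // `p` itself is an immutable binding
    p.push(2);      // receiver is the place `*p` (by &mut), which is mut
    assert_eq!(v, [1, 2]);
}
```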
(The original justification AIUI is that if this weren't the case, let ref mut p = would be no different than let ref p =, and you aren't currently even allowed to write the awkward let mut ref mut p =.)
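For reference, the ref mut form that justification is about, which indeed permits assignment through an otherwise-immutable binding:

```rust
fn main() {
    let mut x = 1;
    let ref mut p = x; // p: &mut i32, though `p` itself isn't `mut`
    *p = 2;            // assigning through the pointee is allowed
    assert_eq!(x, 2);
}
```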
There are other significant privileges given to references as well, such as projected pattern binding modes and borrow splitting. At a fundamental level, naming the place behind &mut bindings is a different operation (pure place expression) than custom DerefMut (computed place expression).
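A sketch of the borrow-splitting privilege: two disjoint field projections through one &mut compile fine, whereas the same projections through a MutexGuard would be two overlapping deref_mut calls and get rejected:

```rust
struct Pair { a: i32, b: i32 }

fn main() {
    let mut p = Pair { a: 1, b: 2 };
    let r = &mut p;
    // the compiler splits the borrow into disjoint `&mut (*r).a` / `&mut (*r).b`
    let (a, b) = (&mut r.a, &mut r.b);
    *a += 10;
    *b += 20;
    assert_eq!((p.a, p.b), (11, 22));
}
```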
```rust
use std::sync::Mutex;

fn main() {
    let mut x = 4;
    let y = Mutex::new(5);
    let mut z = 7;
    let tup = (&mut x, y.lock().unwrap(), &mut z as *mut i32);
    *tup.0 = 5;
    *tup.1 = 10; // error: cannot borrow `tup.1` as mutable
    unsafe { *tup.2 = 15; }
}
```
here the only solution (short of making the tup binding itself mut) would be interior mutability, i.e. Cell or UnsafeCell.
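a sketch of the Cell route, where mutation goes through a &self method and so never needs a mut binding:

```rust
use std::cell::Cell;

fn main() {
    let tup = (Cell::new(5),); // immutable binding, no `mut` anywhere
    tup.0.set(10);             // Cell::set takes &self
    assert_eq!(tup.0.get(), 10);
}
```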
yes, that's why i said this is because of the signature of DerefMut. if deref_mut took a &self receiver instead, the receiver would be coerced to a shared reference instead, and there would be no error.
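RefCell already demonstrates this: its borrow_mut takes &self, so the receiver coerces to a shared reference and an immutable binding suffices:

```rust
use std::cell::RefCell;

fn main() {
    let y = RefCell::new(5); // no `mut` needed
    *y.borrow_mut() = 10;    // borrow_mut takes &self, so no error
    assert_eq!(*y.borrow(), 10);
}
```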
What you'd really want is for MutexGuard to implement DerefUnique (that doesn't exist) rather than DerefMut. DerefUnique would take self as a unique, but not mutable, reference (which also doesn't exist in Rust, but could exist for completeness).