The reference counts in `RcBox` are of type `usize`, and creating a new `Rc` increments the count with regular addition. In optimized code, where overflow checks are removed, it is possible to overflow this count, allowing a use-after-free.
In the old world, where values supposedly could not be forgotten (without running the destructor) without actually leaking memory, it would be impossible for this count to wrap around: doing so would require 2^X `Rc`s, where X is the pointer width, ergo sizeof(Rc) * 2^X bytes, which obviously cannot be allocated in an X-bit address space, so allocations would start failing before the count overflowed.
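The size argument can be checked directly: an `Rc<T>` is a single pointer to a heap-allocated `RcBox`, so it is exactly one `usize` wide. A minimal sketch (the arithmetic here just restates the argument above; it is not part of the original exploit):

```rust
use std::mem::size_of;
use std::rc::Rc;

fn main() {
    // Rc<T> is one pointer to a heap RcBox, so its size equals usize's.
    assert_eq!(size_of::<Rc<u32>>(), size_of::<usize>());

    // X in the argument above: the pointer width in bits.
    let x = usize::BITS;

    // Overflowing the count needs 2^X live Rcs, i.e. 2^X * size_of::<Rc<u32>>()
    // bytes of handles alone -- more than an X-bit address space can hold,
    // so allocation would fail long before the count wrapped.
    println!("would need 2^{} * {} bytes", x, size_of::<Rc<u32>>());
}
```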
But now we have safe `mem::forget`, and can do this:
```rust
use std::rc::Rc;

fn main() {
    let val = 123;
    let ref1: Rc<&u32> = Rc::new(&val);
    {
        let _ref2 = ref1.clone();
        for i in 0u64..4294967295 {
            if i % 10000000 == 0 {
                println!("{}", i);
            }
            std::mem::forget(ref1.clone());
        }
    } // drop ref2
    // attempt to reallocate the memory used for the &u32 as an integer
    let _reuse_plz: Vec<Rc<u32>> = (100u32..1000).map(|x| Rc::new(x)).collect();
    println!("{}", *ref1); // segfault!
}
```
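To see why the loop bound is 2^32 - 1 exactly, it helps to model the strong count as wrapping 32-bit arithmetic (this models only the count, not `Rc` itself; the comments map each step back to the program above):

```rust
fn main() {
    // The strong count on a 32-bit target, modeled as a wrapping u32.
    let mut count: u32 = 1; // ref1
    count = count.wrapping_add(1); // _ref2 = ref1.clone()

    // 2^32 - 1 forgotten clones: 2 + (2^32 - 1) wraps back around to 1.
    count = count.wrapping_add(4294967295);
    assert_eq!(count, 1);

    // Dropping _ref2 now takes the count to 0 and frees the RcBox,
    // even though ref1 (and ~2^32 forgotten clones) still point at it.
    count -= 1;
    assert_eq!(count, 0);
}
```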
As written, this only works on 32-bit targets: even with `-C lto -O`, the conditional print seems to force LLVM to actually count one by one (as opposed to in steps of ten million), which takes about 30 seconds on my VM. I estimate that counting to 2^64 at that rate would take about 3,800 years. If you remove the print, LLVM realizes it can skip the loop, and the program segfaults immediately on both 32-bit and (changing the iteration count) 64-bit targets.
Since it takes so long to actually count to 2^64, on 64-bit targets this issue can only be exploited by code actively trying to trigger the LLVM optimization, as opposed to a user exploiting a buggy Rust application. In the 32-bit case, while the chance is slim, I think it's not outside the bounds of possibility that someone would write this pattern into their application by mistake, allowing it to be exploited (though in much longer than 30 seconds).