Yes, the ideal solution is to introduce a new guarantee: safe Rust code will either run the destructor or leak the memory when “forgetting” a value, so that the number of “live” values always fits into `usize` (except when `sizeof(Value) == 0`).
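Roughly, the hole such a guarantee closes: with a safe `mem::forget`, each forgotten `Rc` clone bumps the strong count without keeping any memory alive, so the count is no longer bounded by the address space and can eventually wrap around. A minimal sketch (the function name is made up, and on 64-bit this would take effectively forever, while on 32-bit it is feasible):

```rust
use std::mem;
use std::rc::Rc;

// Illustration only: repeatedly cloning and forgetting an `Rc` in safe
// code increments the strong count without keeping the clones alive,
// so the count can eventually wrap around `usize`.
fn overflow_refcount(rc: &Rc<u8>) {
    loop {
        // Each forgotten clone leaks a count increment, not memory.
        mem::forget(rc.clone());
    }
}
```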
But `mem::forget` is being used by unsafe code that needs to avoid `Drop` being called (e.g. because ownership of the underlying resource is moved from Rust to C code). Just last week I deleted the `unsafe` block around one of those calls, since it was producing the “unnecessary unsafe block” warning.
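As a minimal sketch of that kind of hand-off (the `c_take_buffer` function is made up for illustration), the pattern looks roughly like this:

```rust
use std::mem;

extern "C" {
    // Hypothetical C function that takes ownership of the allocation.
    fn c_take_buffer(ptr: *mut u8, len: usize, cap: usize);
}

fn hand_off(mut buf: Vec<u8>) {
    let (ptr, len, cap) = (buf.as_mut_ptr(), buf.len(), buf.capacity());
    unsafe {
        c_take_buffer(ptr, len, cap);
    }
    // The C side now owns the allocation; running Vec's destructor here
    // would free it a second time, so suppress Drop.
    mem::forget(buf);
}
```

Note that in this version the `mem::forget` call itself sits outside the `unsafe` block, which is exactly the kind of call the warning flagged.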
@glaebhoerl: I don’t think silently breaking my code by introducing an artificial memory leak in `mem::forget` is the solution here – it’s strictly worse than breaking my code by marking `mem::forget` as unsafe and making me add back the `unsafe` block.
In fact, grepping for `mem::forget` in `vec.rs` is quite instructive in seeing the valid use-cases of `mem::forget`:
- Forgetting `self` in order to suppress the `Drop` impl and transfer ownership (e.g. `Vec::into_iter()`)
- Forgetting a value after making an unsafe copy using `ptr::read` (see the sketch after this list)
- Forgetting zero-sized values in `Vec::push`, since they can be re-created from thin air later.
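For the second bullet, a minimal sketch of the `ptr::read` + `mem::forget` pattern (using a made-up `Wrapper` type rather than the actual `vec.rs` code):

```rust
use std::{mem, ptr};

struct Wrapper<T> {
    value: T,
}

impl<T> Drop for Wrapper<T> {
    fn drop(&mut self) {
        // Cleanup that must not run once `value` has been moved out.
    }
}

// Move `value` out of the wrapper without running the wrapper's Drop impl.
fn into_inner<T>(wrapper: Wrapper<T>) -> T {
    unsafe {
        // Make a bitwise copy of the field we want to keep...
        let value = ptr::read(&wrapper.value);
        // ...then forget the original so the value isn't dropped twice.
        mem::forget(wrapper);
        value
    }
}
```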
We’d need to formulate a guarantee that types like `Rc` can rely on while still allowing all of those uses in unsafe code.
So I see two main solutions here:
1. Introduce the new guarantee, and make `mem::forget` unsafe. This is a breaking change, but how much code would really break? I’d expect most callers to still have the `unsafe` block around `mem::forget` calls, since it hasn’t been safe for all that long. On the other hand, a new guarantee means it’s one more thing to check when reviewing unsafe code. Writing unsafe Rust code is already more difficult than writing C code due to the number of (often poorly-documented) invariants you need to uphold. How much more do we want to add to this burden?
2. Make `Rc` check for overflows. The expected performance cost on 64-bit platforms is extremely low / often zero – we just have to inhibit some optimizations to prevent overflows from being possible. On 32-bit there’s going to be a real cost, though probably not as bad as I initially thought. (A sketch of what such a check might look like follows this list.)
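As referenced above, here is a rough sketch of the kind of check option 2 implies (my illustration, not the actual `Rc` internals):

```rust
use std::cell::Cell;

// Bump a strong count, aborting instead of wrapping on overflow.
fn inc_strong(strong: &Cell<usize>) {
    match strong.get().checked_add(1) {
        Some(n) => strong.set(n),
        // Unwinding here could let user code observe a corrupted count,
        // so aborting the process is the conservative choice.
        None => std::process::abort(),
    }
}
```

On 64-bit the overflow branch is essentially never taken at runtime but still constrains the optimizer; on 32-bit the check runs on every clone, which is the real (if modest) cost.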
If this were pre-1.0, I’d prefer option 1. I’m not sure now… this is a safety hole, and we’ve reserved the right to make breaking changes in these cases… but on the other hand, a fix without breaking changes should be the preferred solution in a post-1.0 world. I guess we need more data to decide properly:
a) How much code is going to break if we make `mem::forget` unsafe again?
b) How big is the impact of overflow checks, both on performance and code-size?