What I'm wondering is whether there is sufficient consensus and/or documentation around smart pointers (and containers) to guarantee that a downcast like this is correct:
```rust
use std::sync::Arc;

struct Struct;
trait Trait {}
impl Trait for Struct {}

fn main() {
    let a: Arc<Struct> = Arc::new(Struct);
    let b: Arc<dyn Trait> = a.clone();
    let b: Arc<Struct> = unsafe { Arc::from_raw(Arc::into_raw(b).cast()) };
}
```
In practice the inner layout (the value behind the pointer) of the `Arc` seems to be identical across the two types. The primary difference is how the immediate `Arc<T>` handle is represented (wide vs. thin) and therefore how the interior value is dropped. If the trait object is dropped last, the underlying value is dropped through its vtable; if the concrete type is dropped last, the drop implementation for `Struct` is called directly. In practice both are implemented with `ptr::drop_in_place`. Note that these behaviors are already in place today due to unsized coercion.
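The wide vs. thin distinction above can be observed directly: the handles differ in size, while the heap allocation they point at is the same. A minimal sketch (the size relations shown hold on current rustc targets, though the exact layout of `Arc` is not a stable guarantee):

```rust
use std::mem::size_of;
use std::sync::Arc;

struct Struct;
trait Trait {}
impl Trait for Struct {}

fn main() {
    // Arc<Struct> is a thin pointer: a single machine word.
    assert_eq!(size_of::<Arc<Struct>>(), size_of::<usize>());
    // Arc<dyn Trait> is a wide pointer: data pointer plus vtable pointer.
    assert_eq!(size_of::<Arc<dyn Trait>>(), 2 * size_of::<usize>());
}
```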
My questions are:
- Is this currently correct?
- If so, is there stable documentation (even if incomplete) which supports such use?
- If not, what would be the minimal changes needed to make it correct? E.g. would smart pointers have to specify that the interior representation is identical, or does something already imply this (such as the existence of a `CoerceUnsized` impl)?
Note that this is apart from the blessed `Arc<dyn Any>`, although this might be a factor in arguing why unsafe downcasts are correct.
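For comparison, the blessed path uses the stable, safe `Arc::downcast` on `Arc<dyn Any + Send + Sync>`, which performs the same thin-pointer recovery but checks the `TypeId` first. A minimal sketch:

```rust
use std::any::Any;
use std::sync::Arc;

#[derive(Debug, PartialEq)]
struct Struct(u32);

fn main() {
    let a: Arc<dyn Any + Send + Sync> = Arc::new(Struct(42));
    // Safe counterpart of the unsafe cast in the question:
    // succeeds only if the erased type really is Struct.
    let b: Arc<Struct> = a.downcast::<Struct>().expect("type matches");
    assert_eq!(*b, Struct(42));
}
```

Internally this is implemented with the same raw-pointer round trip, which is part of why one might argue the unsafe version is correct, but it only exists for `dyn Any`, not for arbitrary traits.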
Note on dropping trait objects
Playground where you can see how drop works for each type.
With `Arc<Struct>` dropped last:

```
{ fn: "core::ptr::drop_in_place<playground::Struct>" },
{ fn: "alloc::sync::Arc<T,A>::drop_slow" }
```
With `Arc<dyn Trait>` dropped last:

```
{ fn: "core::ptr::drop_in_place<playground::Struct>" },
{ fn: "core::ptr::drop_in_place<dyn playground::Trait>" },
{ fn: "alloc::sync::Arc<T,A>::drop_slow" }
```