Two points I'm wondering about: Why would it need to introduce a block? And how good is the comparison to Python, given that its `with` primarily serves the role of destructors (which do get invoked on exceptions), and that its `ExitStack` actively allows retroactively deciding against dropping, as well as dropping before scope end?
The insistence on differentiating between block vs. scope is due to Rust's interesting and surprisingly refreshing take where blocks primarily return values; they are not inherently the start and end of lifetime scopes since non-lexical lifetimes were enabled. Tying semantics to a block only really makes sense, imho, if there is something inherently semantically important about the return value. That doesn't seem to be the case here, and in my experience the restriction to a single return type is one of the two annoying things about the scope construction; the other is that it's non-orthogonal to control flow.
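To illustrate what I mean (my own minimal example, not from the proposal): since NLL, a borrow ends at its last use, not at the closing brace, so the block's remaining job really is just to produce a value.

fn main() {
    let mut s = String::from("hello");
    let len = {
        let r = &s;          // shared borrow begins here
        let n = r.len();     // ...and ends here, at its last use (NLL),
                             // not at the closing brace of the block
        s.push_str("!");     // so mutating inside the same block is fine
        n                    // the block's actual job: return a value
    };
    println!("{} {}", len, s);
}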
From a pure design perspective, what kind of guarantee / invariant are we promising in the first place? It seems to narrow down to: "this guard object's `Drop` is called before some lifetime `'a` ends". That can actually be encoded in a type:
use core::marker::PhantomData;

// Note: getting the lifetime variance right is non-trivial.
struct IsDropped<'a, T> { inner: T, _scope: PhantomData<fn(&'a ())> }

impl<'a, T> IsDropped<'a, T> {
    /// Safety: Caller promises to drop the returned value before
    /// the lifetime ends, by any means they wish.
    pub unsafe fn new(val: T) -> Self {
        IsDropped { inner: val, _scope: PhantomData }
    }

    /// Safe, but potentially useless constructor:
    pub fn with_static(val: T) -> IsDropped<'static, T> {
        IsDropped { inner: val, _scope: PhantomData }
    }
}
// Edit: there was a DerefMut impl here which definitely is _not_ sound.
// TBD: unsafe accessor methods, such as `&mut IsDropped -> Pin<&mut _>`.
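For what it's worth, one shape such an accessor could take (purely my sketch; the name `as_pin_mut` is mine, and whether the constructor's promise is strong enough for `Pin`'s drop guarantee is exactly the open TBD above):

use core::pin::Pin;

impl<'a, T> IsDropped<'a, T> {
    /// Safety: relies on the caller-made promise from `new` that the value
    /// is dropped before `'a` ends; soundness of handing out a `Pin` from
    /// that promise alone has not been established.
    pub unsafe fn as_pin_mut(&mut self) -> Pin<&mut T> {
        unsafe { Pin::new_unchecked(&mut self.inner) }
    }
}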
Such a type would have the advantage that the invariant can be observed by other code. If, at any point, we receive a reference to such an instance, we can rely on the given `Drop` being sequenced before the end of the scope. In particular, a guard might allow scheduling a callback into the guard (akin to `ExitStack::callback`), and the caller is able to rely on it being called before end-of-scope. Or, conversely, if `IsDropped` is a receiver type, then the type `T` can provide methods that do so.
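To make that concrete, here's a rough sketch (the names `CallbackGuard` and `schedule` are mine; I'm assuming it lives in the same module as `IsDropped` so it can reach `inner`, and that it only pushes into the guard without ever moving it out):

// A guard that runs queued callbacks in its Drop, in the spirit of
// Python's ExitStack::callback.
struct CallbackGuard<'a> {
    callbacks: Vec<Box<dyn FnOnce() + 'a>>,
}

impl<'a> Drop for CallbackGuard<'a> {
    fn drop(&mut self) {
        for f in self.callbacks.drain(..) {
            f();
        }
    }
}

// Because the parameter is `&mut IsDropped<'a, _>`, the caller can rely on
// the queued closure running before `'a` ends: the IsDropped invariant
// guarantees the guard's Drop is sequenced before the end of `'a`.
fn schedule<'a>(
    guard: &mut IsDropped<'a, CallbackGuard<'a>>,
    f: impl FnOnce() + 'a,
) {
    guard.inner.callbacks.push(Box::new(f));
}

The point being: any code holding that `&mut IsDropped<'a, CallbackGuard<'a>>` gets the end-of-scope guarantee without having to trust the creator's lexical structure.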
The following API should be enabled by such a type:
impl<'scope, 'env> std::thread::Scope<'scope, 'env> {
    // Via `with_static`, this safely subsumes `std::thread::spawn`.
    pub fn spawn(self: &IsDropped<'env, Self>, …)
}
Now, how can we actually use this? The first thing that comes to mind is a simple macro like `pin!`; but I'm almost certain there's some other way that I'm overlooking on this first take.
Try it out here
Usage; tl;dr: this does not compile, because `test` does not live long enough:
let value = Scope;
is_dropped!(value);
{
    let test = ();
    Scope::spawn(value, &test);
}
Here's the macro magic to make it work:
is_dropped!(value);
// … expands to something like:
struct InnerScope<T>(core::marker::PhantomData<fn(T)>);
impl<T> InnerScope<T> {
    fn bind(&self, _: T) {}
}
impl<T> Drop for InnerScope<T> { fn drop(&mut self) {} }
let scope = ();
// This having `Drop` forces it to be dropped lexically, i.e. at end of scope,
// which requires `scope` to be alive at that point,
// which implies `scope` outlives `$value`.
let scope_guard = InnerScope(core::marker::PhantomData);
scope_guard.bind(&scope);
// Hide the name behind a `&mut _`,
// which makes it impossible to move and thus forget.
let ref mut $value = unsafe { IsDropped::new($value) };
fn unify_lt<'env, T>(
    _: &mut IsDropped<'env, T>,
    _: &InnerScope<&'env ()>,
) {}
unify_lt($value, &scope_guard);