Pre-pre-RFC: `NoDrop` marker trait

Here's an idea that I've wondered about:


In some APIs, it would be nice to force the API user to call some method rather than allowing drop glue to clean up the type. For example, if some work that needs to be done around drop time is fallible and you'd like to force the caller to handle it.

I've personally wanted this for Quinn, where I'd like to "force" the API user to deal with some potentially required follow-up when the caller is done consuming from a receive stream.

Guide-level explanation

core will provide a NoDrop marker trait. If it is implemented for a type, owned values of that type may not implicitly go out of scope. The only way to clean them up would be to destructure the values. Typically this would be done by a method of the type, which allows the API to, for example, handle fallibility; that method could also be made async.


Since explicitly cleaning up is less ergonomic than implicit drop glue, we would not want this trait to be widely used. However, it seems like a useful increase in expressivity.


This seems related to ManuallyDrop, which I first encountered when implementing the aforementioned Quinn code. ManuallyDrop, as I understand it, prevents the compiler from inserting drop glue, but doesn't provide any guarantees that cleaning up is done in some other way. By contrast, NoDrop would force a caller to still implement some way of cleaning up a type's resources, which seems like an improvement to me.


This must have been discussed before - I would search for "linear types" as the keyword. I'm just saying this so that the discussion can start from the conclusions of previous iterations.

How is panic/unwinding handled in this plan, since normally everything is dropped in that case too?


Unwinding is a good point. I can think of two approaches:

  • NoDrop only forces non-panicking code to handle cleanup explicitly. This is easier, but might be problematic in that it doesn't force the API user to think about the panic case, so they might assume that dropping will never run when it actually might.
  • NoDrop adds a panicked(self) method whose impl is forced to handle that case explicitly. This forces the implementer to consider unwinding, which seems like an improvement, and the considerations the API implementer wants to force on the caller might not be as important in the panic case. For example, Quinn currently implements Drop for Chunks and, if this were added, would use that implementation in NoDrop::panicked() rather than in Drop.

Additionally, what is done for mem::forget on a value of such a type? Note this bit in those docs which references the limitations of Rust's safety guarantees:

Because forgetting a value is allowed, any unsafe code you write must allow for this possibility. You cannot return a value and expect that the caller will necessarily run the value’s destructor.


I would expect mem::forget() to do the same thing as it does on other values. Since its purpose is to bypass drop glue, it seems sensible that forbidding drop glue to run has no effect on it.


Couldn't you just panic on drop as a way to force the user to convert to a different type (and thereby force some final processing?)

That makes it a runtime property; the value of "true" linear types is making this a compile time reminder.

Actually, though, that may be a good way to handle pseudo-linear types in Rust: do it as a stronger form of #[must_use] which warns if the value is implicitly dropped at end of scope.

The main problem with linear types is that they don't compose well. How do you handle containers of linear types? In a way it's somewhat similar to pinning; a pinned type in C++ parlance is one with the move constructor (and probably the copy constructor) deleted; a linear type is one with the destructor deleted (or private, I suppose).

In C++, containers "support" exotic types without destructors simply through duck-typed APIs, and you get template errors when you try to use an API that does something the contained type doesn't support. Rust doesn't have that "luxury," in that it avoids post-monomorphization errors, and would have to enable/disable non-linear-compatible APIs via generics.

In a best-case world, we make every generic parameter T bound by Drop by default (the way it is bound by Sized, where in this world T: Drop means "droppable," not "implements Drop explicitly"), and every generic needs to consider whether it can opt in to ?Drop types.

This is, at best, a lot of work across the entire ecosystem to support a small family of exotic types. And there are other issues with new ?Trait bounds.

The best reference article for the pains of actual linear types is probably Gankra's:

And it's also worth noting that a linear type feature is very likely to be used for soundness purposes, as in "the type must be dropped and cannot be forgotten." Whether a design could actually guarantee that is an open question with a likely negative answer: it is always sound for safe code to skip running destructors.


While I see some appeal in your proposal, as @CAD97 noted, it brings a lot of issues. However, it reminds me a bit of my own question and the related idea of a Close trait. Perhaps, it might be worth extending this idea with a hook that could be used during panicking as you suggested, with the default implementation simply dropping the thing.


Random thought: would it be possible to apply the no-panic trick to the drop impl of a type to check whether its drop glue is ever run?

This of course has all the same limitations of no-panic, due to being an error at the linker stage post optimization and codegen.


An additional limitation of using the no-panic trick on a drop impl would be that it becomes impossible to turn the value into a trait object as the vtable always references the drop glue.


I know most of you are aware of this, but I'll post it nonetheless for reference, especially since it's an implementation which already offers a possible answer to most of the questions regarding panic and whatnot.

With "branding type constructors" (type constructors which manage to yield an instance of a unique type per instantiation), one can use the type-state pattern to ensure that a non-diverging input has properly called the desired destructor (or constructor! This can be used to structurally prove that an out-pointer to a structure is initialized by initializing each and every field).

The root point of a brand, in Rust, is an "anonymous" / unnameable lifetime, either because it is macro-generated, or because it is higher-order (callback style):

mod lib {
    type PhantomInvariant<'lt> =
        ::core::marker::PhantomData<fn(&'lt ()) -> &'lt ()>;

    struct Id<'brand> /* = */ (PhantomInvariant<'brand>);

    struct Foo<'brand> {
        // …
        _brand: Id<'brand>,
    }

    impl<'brand> Foo<'brand> {
        fn consume (self: Foo<'brand>)
          -> ProofOfConsumption<'brand>
        {
            // …
            ProofOfConsumption(self._brand)
        }
    }
    // where
    struct ProofOfConsumption<'brand>(Id<'brand>);

    /// Here is where the magic happens: "must consume"-yielding constructor!
    impl Foo<'_> {
        pub(self) // private
        fn new (/* … */) -> Self
        {
            Self {
                // …
                _brand: Id(<_>::default()),
            }
        }

        fn with_new<R> (
            /* …, */
            scope: impl for<'brand> FnOnce(Foo<'brand>) -> (ProofOfConsumption<'brand>, R),
        ) -> R
        {
            scope(Self::new(/* … */)).1
        }
    }
}
The key is thus that

for</* any */ 'brand> …(Foo<'brand>) -> ProofOfConsumption<'brand> …

is the signature required of the callback: it needs to be a closure whose function body is lifetime-agnostic (otherwise it wouldn't meet the desired higher-order signature), and, while doing so, it needs, for each / any input 'brand in Foo, to be able to yield a ProofOfConsumption<'brand>.

This will only be possible if the logic of the given scope / callback, on every possible branch inside it, either .consume()s the Foo, or diverges.

Then, one could always write a Drop impl for Foo<'_>, which would only be called in the panic!-king case: impl Drop for Foo<'_> and impl Foo<'_> { fn consume(self) } would thus be the two function bodies handling the two cases, with consume being able to fail or whatnot, but Drop not (and one could even panic! within the Drop impl to abort if the scope panics, should that consume call be otherwise mandatory).

This does have the issue of requiring callbacks, hence not playing super nicely with try blocks, async, or early break, continue, or returns.

This also answers the question of structurally combining such entities, or of storing them in a collection: the unique lifetime for each makes the latter impossible (:grimacing:), and the former possible but quite cumbersome:

struct Baz<'foo, 'bar>(Foo<'foo>, Bar<'bar>);

impl Baz<'_, '_> {
    fn with_new<R> (
        scope: impl for<'f, 'b> FnOnce(Baz<'f, 'b>) -> (ConsumedBaz<'f, 'b>, R),
    ) -> R
    {
        Foo::with_new(|foo| {
            Bar::with_new(|bar| {
                let baz = Baz(foo, bar);
                let (baz_token, ret) = scope(baz);
                let (foo_token, bar_token) = baz_token.into_parts();
                (bar_token, (foo_token, ret))
            })
        })
    }
}


no-panic is not correct Rust code, and is not even well-formed. It relies on DCE to avoid errors, which is not reliable, as DCE is an optional optimization. The way I would describe it is ill-formed, no diagnostic required. I would actively discourage attempts to extend this to a generalized no-drop operation.


This was discussed before:


When I looked at this years ago, I suggested this effort could be significantly helped by a lint. I think automated refactoring may also be possible. It would likely be very irritating across the ecosystem, as folks who maintain generic container crates may be "encouraged" to make their crates "linear"-aware... On the other hand, I don't believe the use cases are as exotic as you suggest. Linear types seem (to me) to potentially make async future cancellation much more intentional (no longer a foot-gun), which is very relevant to how Rust seems to be evolving today.


For whatever it's worth (and apologies if I'm stealing anyone's thunder), I just posted something like an updated proposal here: Communicating With Code: Less Painful Linear Types

Wouldn't the escape hatch described in the article nullify any use of linear types for safely exposing unsafe interfaces (like scoped threads/tasks)?


I had the idea of creating #[finalize] methods: ones that require !ImplicitDrop, take self, and are required to be called for a value to be disposed of. Drop::drop (plus drop glue) was considered as the minimal possible disposing method. Inside finalize methods, all types may be destructured.

Pro: finalize methods can have arguments and return types.
Con: requires adding support on a per-item basis.

Just to share.

I also encourage looking at the drop_unwind proposal from Niko, which he wrote up somewhere. It'd be a nice addition for handling unwinding in the context of such proposals.

No, in order to use the escape hatch, the Drop implementation needs to honor the linear type constraints. consume takes the value by move, and it's not allowed to fall out of scope in the trait implementation. This means you're more-or-less forced to forward the value to a consuming function defined against the internal value. You're still forced to honor the type author's intent when you adapt the type to be affine.

But Drop::drop takes an &mut self, so there's no instance of your linear type falling out of scope. Moreover you can std::mem::forget the wrapper, thus the linear type can't really rely on it being linear for safety.


I have an alternative solution.

With this we don't use a marker trait; instead we have a special annotation, so we can write our own marker object (like PhantomPinned).

struct PhantomNoDrop;
impl Drop for PhantomNoDrop {
    fn drop(&mut self) { unreachable!() }
}
With this, the compiler would complain about any type whose generated drop glue could ever call this drop implementation.