Are memory leaks considered to violate memory safety?

According to the reference, “Leaks due to reference count cycles, even in the global heap” is not unsafe. However, that doesn’t say anything about other types of memory leaks.

Should memory leaks due to other problems (such as a closure that violates a contract being passed by the user) be considered to violate memory safety?

I am divided on this issue.

I just want to point out that I want a clear definition of what is meant by “memory leak” here. E.g., is a chain of Boxes, reachable from the stack but never read from, to be classified as a memory leak?

  • An experienced C++ or Java programmer would probably say “no, that is just bad coding: you should be ensuring that you free that memory.” And a Rust programmer would probably say the same thing.

  • But when you look at this from the point of view of the client of the badly behaved application or library, such a chain of Boxes is indistinguishable from unsafe code that forgets to call free().

  • So what does it accomplish to treat the forgotten-free as a case of memory unsafety, but not the chain-of-boxes?

    • (The only advantage I can immediately see is that memory introspection tools (that e.g. trace objects the same way a GC would) might be far more helpful in debugging the chain-of-boxes than they would the forgotten-free.)
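To make the chain-of-boxes scenario concrete, here is a minimal sketch (the Node type and loop are invented for illustration): a list of Boxes that stays reachable from the stack for the whole program but is never read, which from the outside looks exactly like memory that was never freed.

```rust
// A hypothetical "chain of Boxes": reachable from the stack, never read.
struct Node {
    _next: Option<Box<Node>>,
}

fn main() {
    let mut head: Option<Box<Node>> = None;
    for _ in 0..1000 {
        // Prepend a node; the whole chain stays owned by `head`.
        head = Some(Box::new(Node { _next: head.take() }));
    }
    // `head` is live until the end of main, so the entire chain stays
    // allocated even though it is never read again -- observably the
    // same as a forgotten free(), yet written in 100% safe code.
    assert!(head.is_some());
}
```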

One thing that does violate memory safety is a memory leak that fails to call a destructor.

  let guard = foo.get_manipulator_that_cleans_up_on_drop();
  mem::forget(guard); // This is marked unsafe for a reason
  foo.do_stuff(); // oh no, I'm in an inconsistent state
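A runnable sketch of that hazard, with all names (Guard, busy) invented for illustration: the guard's destructor is what restores an invariant, so forgetting the guard leaves the invariant permanently broken. (This compiles on current stable Rust, where mem::forget is callable from safe code.)

```rust
use std::cell::Cell;

// Hypothetical guard: while it is alive, `busy` is true; its destructor
// is responsible for restoring the invariant.
struct Guard<'a> {
    busy: &'a Cell<bool>,
}

impl<'a> Guard<'a> {
    fn new(busy: &'a Cell<bool>) -> Guard<'a> {
        busy.set(true);
        Guard { busy }
    }
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        self.busy.set(false); // clean up on drop
    }
}

fn main() {
    let busy = Cell::new(false);
    let guard = Guard::new(&busy);
    std::mem::forget(guard); // destructor never runs...
    assert!(busy.get());     // ...so the "busy" state is never reset
}
```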

Regardless, I’d say that leaking types with no Drop impl, or leaking after the destructor has been called, is 100% safe. The fact that most code that could leak uses non-Copy (or generic) types suggests to me that one can take it as an axiom that safe code is quite unlikely to leak.
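A small illustration of the "leaking without a destructor is safe" point, assuming current stable Rust where Box::leak is a safe API: the allocation is simply never reclaimed, and no unsafe code is involved.

```rust
fn main() {
    // Leaking a boxed integer (a type with no Drop impl) through the
    // safe Box::leak API: the allocation lives for the rest of the
    // program, and nothing about memory safety is violated.
    let b: Box<i32> = Box::new(42);
    let leaked: &'static mut i32 = Box::leak(b);
    assert_eq!(*leaked, 42);
}
```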

One example where it might be “nice” to be able to safely leak is interaction with local custom allocators. Say you have a Tree<T, LocalAllocator>, where LocalAllocator is a type with many instances (not a singleton like the global allocator), and your Tree is implemented with Boxes. If you can’t leak, each Box must contain a pointer to the allocator that allocated it, because if your application panics, all the Boxes need to be freed by the appropriate allocator (which is only known at runtime). This is inefficient for the Tree, however: it uses a single allocator instance for all its nodes, and we should be able to use that fact to avoid an entire extra pointer per node.

Therefore we define a Mox<T, Allocator> type. Mox has the semantics of Box, but has to be manually freed by passing it its allocator; it still uniquely owns its data like Box. Given this design, we have two options for a Drop implementation:

// Option 1: abort -- we can't avoid leaking memory (destructor bomb)
fn drop(&mut self) { std::process::abort() }

// Option 2: leak the allocation, but still run T's destructor
fn drop(&mut self) { unsafe { ptr::read(self.ptr); } }

Personally, I prefer the memory leak, because it means that your Tree doesn’t kill everything in a panic. Also, in the case of a local allocator, all the memory will probably be cleaned up very soon anyway. If there’s some kind of system that allows an object to ask for a global allocator instance, then this can potentially also free the memory if the Mox is handed a global allocator.
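A runnable sketch of what such a Mox might look like, under option 2 (leak on drop, run the destructor). The allocator trait and all names here are invented for illustration, not an API from the thread; the point is only that the node itself stores no allocator pointer.

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::marker::PhantomData;
use std::ptr::NonNull;

// Hypothetical minimal allocator interface (illustrative only).
trait LocalAlloc {
    unsafe fn allocate(&self, layout: Layout) -> *mut u8;
    unsafe fn deallocate(&self, ptr: *mut u8, layout: Layout);
}

// Stand-in allocator backed by the global heap, for the demo.
struct GlobalShim;
impl LocalAlloc for GlobalShim {
    unsafe fn allocate(&self, layout: Layout) -> *mut u8 { alloc(layout) }
    unsafe fn deallocate(&self, ptr: *mut u8, layout: Layout) { dealloc(ptr, layout) }
}

// Mox: uniquely owns a T like Box, but stores no allocator pointer;
// freeing requires passing the allocator back in.
struct Mox<T> {
    ptr: NonNull<T>,
    _marker: PhantomData<T>,
}

impl<T> Mox<T> {
    fn new_in<A: LocalAlloc>(value: T, a: &A) -> Mox<T> {
        unsafe {
            let p = a.allocate(Layout::new::<T>()) as *mut T;
            p.write(value);
            Mox { ptr: NonNull::new(p).unwrap(), _marker: PhantomData }
        }
    }

    // Manual free: T's destructor runs and the memory goes back to `a`.
    fn free<A: LocalAlloc>(self, a: &A) {
        unsafe {
            let p = self.ptr.as_ptr();
            std::mem::forget(self); // skip the leaking Drop below
            std::ptr::drop_in_place(p);
            a.deallocate(p as *mut u8, Layout::new::<T>());
        }
    }
}

// Option 2 from above: if a Mox is dropped without its allocator,
// run T's destructor but leak the allocation.
impl<T> Drop for Mox<T> {
    fn drop(&mut self) {
        unsafe { std::ptr::drop_in_place(self.ptr.as_ptr()) }
    }
}

fn main() {
    let a = GlobalShim;
    let m = Mox::new_in(String::from("node"), &a);
    m.free(&a);                         // destructor + deallocation
    let _leaky = Mox::new_in(7u32, &a); // dropped: destructor runs, memory leaks
}
```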

Imagine this case:

fn foo() -> ! { loop {} }

fn bar() {
    let leaky = Box::new(true);
    foo();
}

fn baz() -> ! {
    let leaky = Box::new(true);
    loop {}
}
The leaky variable is guaranteed to never get dropped in both cases. If memory leaks are considered unsafe, should this be considered unsafe? I think the world is a better place if memory leaks are not considered unsafe.

@Thiez IMHO, non-terminating functions do not qualify as memory leaks.

That said, I am not sure about functions that panic!(). I don’t really know the details of unwinding.

Talking strictly about memory leaks, the worst that can happen is that a program continually requests more memory in some kind of loop. @pnkfelix points out an interesting case of wasted resources, but unless this happens recursively the consequences are unlikely to be very bad (detecting such cases would allow an interesting optimisation, however).

My opinion is that memory safety is about avoiding undefined behaviour and stack/memory corruption.

Talking about safety, combining @Gankra’s point about not calling the destructor with @Thiez’s no-exit functions could be an issue (though not a memory one, and not one which is solvable in the general case).

bugs are "bad coding" also. OOM-death with swapping is almost worst thing that can happen, much worse than usual failure of process.

I was not claiming such a scenario is desirable.

Deadlocks, fork-bombs, filling up the file-system, etc are all bad things that can happen.

I am nonetheless trying to figure out where we are drawing the line in terms of what we classify as “memory safety bugs.”

I think we have to rethink this, since there has been an actual case where memory leaks caused a use-after-free.

@theme the problem isn’t just a memory leak. The problem is specifically claiming that you called a destructor and then failing to.


A memory leak is a bug, but not in and of itself unsafe. If you take the definition of “memory leak” to be “unreachable memory not marked as free”, then it pretty much can’t be unsafe: you can’t access the leaked memory (if you could, it wouldn’t be a memory leak), so no memory-safety problems can occur.
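This matches the reference-count-cycle case from the top of the thread: a cycle built entirely in safe code leaks, yet once the last external handle is gone, no safe code can ever reach the leaked memory again. A minimal sketch (Node and the two-element cycle are invented for illustration):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // Two nodes that point at each other: each keeps the other's
    // strong count above zero, so neither is ever freed.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b.clone());
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
    // When `a` and `b` go out of scope, the cycle leaks -- a bug,
    // but the leaked memory is unreachable, so it is not a safety
    // violation.
}
```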

There are cases where not running the destructor is unsafe, but that isn’t because of a memory leak. A type might not allocate memory anywhere and still cause issues if its destructor isn’t run.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.