Proposal: eliminate wording "memory safety" and "thread safety"


I think these wordings are question-begging, as discussions of them on, say, HN quickly degenerate into “yes, leaks are memory safe”, “yes, deadlocks are thread safe”, and so on.

I found that casual readers come away with the impression that, “sure, you can prove anything if you redefine terms to whatever you want”. This remains true even though we have fixed definitions of these terms that we don’t change willy-nilly, because the terms and their definitions clearly seem non-congruent, and claiming congruence seems absurd.

I think it is better to focus on why these terms are defined that way. Say, we prevent arbitrary code execution. Or maintain control flow integrity. Or whatever.


Without any details, my reaction is basically the same as Let's reword front page claims

I think the same applies to “memory safety”, and for both terms I’ve simply never heard of any alternative term that comes anywhere close to being as widely understood as these. While they are also widely misunderstood, I don’t think banning the terms without a better alternative would really help, especially since they’re not specific to Rust, and we’d have to explain the concepts in all the same places we do today no matter what term we used for them.

So let’s get some details. What are you suggesting we eliminate these terms from? The Book? All official docs? Do you have any specific excerpts from those documents in mind and how you’d prefer to rewrite them?


I found that while there is no alternative technical definition, the colloquial definition is really some approximation of “freedom from memory-related bugs” and “freedom from thread-related bugs”. So “a memory-related bug (say, a leak) does not contradict freedom from memory-related bugs (a.k.a. memory safety)!” just sounds crazy. I think ignoring this mismatch, which you also admit is widespread, is not wise.

I will propose some concrete rewrites in diff form in the near future. The reason I didn’t do that in the first place is that I got the impression that some rough discussion is preferred before such diffs.


Maybe the better replacement is “memory-related non-UB guarantee” and “thread-related non-UB guarantee”.

For example, a dangling pointer is one kind of memory-related UB: you don’t really know what will happen if you access a dangling pointer; it can be a CPU-level exception, a valid pointer to something you didn’t want, or anything else.

A leak is not this kind of UB. Leaking is guaranteed to increase your resource usage, but other than that it does not make your program unpredictable. Your program is still in a well-defined state, including the error you get when you try to allocate a resource and none is available anymore.

In the same sense, a data race is UB: you don’t really know what you would get from reading or writing the location. As your invariants are violated, your program state is not predictable.

On the other hand, a deadlock is not UB. When it happens you know the thread is blocked forever, but otherwise your program is in a well-defined state; you just may not be able to figure out what that state is.

So yes, if there is a term to replace the word “safety”, it is “non-UB guarantee”.


I think “memory safety” by definition encompasses those things, and, if anything, Rust has a stronger definition of memory safety (data-race safety) than the colloquial definition.

Perhaps it’s just my background as a security professional, but I find myself constantly reading articles like these:

Preventing an attacker from taking over your computer because a programmer miscalculated the size of a buffer or a location within it seems like the table stakes of “memory safety” to me.

“Thread safety” I’m a lot more on the fence about (or worse, its even more boastful incarnation, “fearless concurrency”). While the static checks Rust provides to eliminate data races are great, multithreaded programs are still fraught with errors. I’m not sure “fearless concurrency” applies in a language where PoisonError exists, for example, or where deadlocks are possible.


Excuse me? That’s an extreme exaggeration when it comes to Rust’s claims of safety.

Nobody “redefines” anything (especially not maliciously or in an intentionally misleading manner), let alone redefining terms to “whatever they want”.

Memory leaks and deadlocks do not unto themselves constitute unsafety. They are often undesirable and they are often bugs, but Rust does not claim to be able to prevent all bugs (which would be impossible). In particular:

  • if there is a memory leak in a program, it can continue running normally and it can produce 100% correct results. Unless, of course, the leak is repeated until the program runs out of memory; but in that case it will halt. It won’t produce erroneous results, it won’t access memory it has no right to access, and it won’t reinterpret memory as the wrong type (just to name a few common ways of violating memory safety), merely because of a memory leak.
  • if there is a deadlock, part of the program or the whole program can diverge, ceasing to produce any further results or interaction. Again, this in itself doesn’t cause any incorrect results (that a race condition could produce), but admittedly it is worse than a memory leak in that it’s immediately disruptive.

Yes, these can be a starting point of escalation to worse bugs, and even lead to security or secrecy being broken or worse (I could certainly imagine e.g. a timing attack based on a deadlock, or a life-sustaining device stopping because it ran out of memory). However, the technical terms of “memory safety” and “thread safety” aren’t concerned with every possible transitive outcome of such phenomena – again, that would be infeasible to account for, and as such, impractical, in any system of nontrivial complexity. Hence, the usual definition is more useful since it is more local – it’s concerned with immediate consequences only. The assessment of what real-world problems these immediate consequences might lead to down the road is a task for us, professionals (in programming, information security, systems engineering, etc.). It cannot and should not be condensed in a mere definition, in the hope that it will clear up all possible doubt and misunderstanding.

The fact that many people, unfamiliar with lower-level details of (systems) programming, confuse the generic notion of “any bug” with the specifics of memory and thread unsafety issues is unfortunate, but the right solution is to educate such future potential users of Rust, instead of diminishing the merits of the language or falling back to even more obscure, 100-word mini-lessons in systems programming whenever/wherever we want to “sell” Rust.


From the technical side of things, this is actually an instance of “you can prove anything if you redefine your terms” as much or more so than “memory safety” is from the colloquial side, since the definition of UB is language-dependent. For example, Java does not make data races UB (it instead enforces a very, very weak bare minimum of memory consistency on all memory accesses) but that does not make those accesses significantly more useful for avoiding threading errors. Likewise, it would not be difficult to define a language which does not consider use-after-free UB (though it would be more difficult to make an optimizing compiler for it).

It is also so jargon-y that I think people unfamiliar with this technical meaning are unlikely to get anything out of reading such a statement. You might consider this an upside as it might lead to them looking up the technical meaning, but I just argued why the non-technical meaning has issues.


On the other hand, I’ve seen people refer to Go as memory safe (because GC) and “better at concurrency” (because channels).

IMHO Rust gives much stronger guarantees in these areas, so it is justified to be bold with its claims.


In the C world, many safety features (ASLR, R^X, hardware pointer tagging) focus on catching invalid accesses and limiting the damage by crashing the program at runtime, rather than preventing the errors from being written in the first place.

So even when in both cases exploits are prevented (and C calls it safety), there’s a qualitative difference in how it’s achieved. I’m not sure how to phrase that without implying Rust prevents (all) bugs…


I’m really with @H2CO3 and @kornel on this one, but to support @sanxiyn’s point, I’d say that “memory safety” could be rephrased more explicitly as “memory access safety”, which is less ambiguous while remaining just as complete, for the mere price of a single added word.

Regarding thread safety, there is no such Rust claim that I am aware of; Rust actually claims to be data-race free, and even quickly nuances that claim (regarding the OS’s inherently “racy” nature):


It’s pretty clear that Sanxiyn’s quote is paraphrasing an incorrect assumption that they fear is commonly made. That is the entire point of the post! And in fact the sentence in question is even in quotation marks.

This is similar to how I usually express Rust’s safety value proposition. I have said something like:

Rust guarantees by default that undefined behavior won’t occur even in a multi-threaded, shared-memory context.

I find it best to explain this while wearing this shirt:


It is, but it also sounds like OP finds it justified. That is what I reacted to. (Which should be clear from the rest of my post – that is, my point was that removing Rust’s claims of memory safety as if we were guilty of deception is not the solution.)


Yes, I do think it is somewhat justified. I think arguing definition of safety is analogous to shouting “free software is not about free as in beer” or “hackers are not crackers”. I disapprove of both, and I think people should just concede defeat. Language is owned by people and usage, not by dictionary or textbook.


I really dislike that argument, to be honest. If we resort to such extreme linguistic positivism that we accept that “words mean whatever each individual wants them to mean”, then language ceases to be a useful means of communication.

This is especially true in technology. Definitions, however arbitrary, are important, because we wouldn’t stand a chance of any sort of precise communication if they weren’t fixed. Furthermore, definitions should be useful too – but when they imply effects that can’t be proved or disproved (or are at least unreasonably hard to), they lose their usefulness; consequently, such a definition is approximately no better than no definition at all.

This is why “every possible bug that might ever result from doing X” (the core of what some people might consider “unsafety”) is not useful: it’s so broad, so complex, and so hard to quantify that one doesn’t really gain any information by using it, and this is why I don’t think it’s a good move to allow this overly liberal interpretation. At best, it doesn’t provide any new insights, at worst, it just causes confusion.

My opinion is that if one wants to participate in a discussion about technology (for instance, in this case, to judge the proposition “Rust has memory safety”), they should learn the vocabulary corresponding to the technology at hand, as not doing so only leads to chaos and unnecessary debates. In this sense, yes, dictionaries and textbooks do (or at least should) define at least the part of the language that is concerned with technology.

As a closing thought: it is an unfortunate fact that natural languages are fundamentally unsuited for a 100% correct description of scientific truth… however, I don’t think it means that we shouldn’t at least try our best. Yes, for such a 100% correct and precise description, one would ideally, rigorously and mathematically, formulate Rust’s operational semantics.

That is not a great way to do marketing though, when newcomers just want to quickly know why they should even consider trying Rust. So, as long as we have more to say in detail, and as long as we don’t claim that “Rust prevents absolutely all possible bugs”, I don’t consider the “Rust is a memory safe and thread safe language” tagline the least bit dishonest, misleading, or even just an unfortunate wording. It’s a fine formulation in its own context.


In other words, if Rust is not memory safe, then neither is Java, and if Rust is not thread safe, then neither is Erlang. Since both can leak memory and both can deadlock, yet are considered “safe languages,” deadlocks and memory leaks must not rule out Rust’s claim to be safe.

In addition, all Turing-complete languages can leak memory, and any language with a Pi-calculus equivalent concurrency model can deadlock, so if Rust isn’t safe, then nobody is.


… I have not heard this before. What does a “memory leak” even mean in the context of a minimal Turing machine model where memory is on a “tape” that is always accessible?


Well, obviously you can write a Turing-complete language that simply doesn’t allow you to allocate memory, and that trivially won’t leak.

I’m guessing what @notriddle meant was that, given a Turing-complete language that allows allocating/reading/writing/deallocating memory at any time, proving that a piece of code actually deallocates all the memory it allocated would probably require solving the halting problem. In practice, that would include every language that lets you write functions returning a dynamically-sized container or raw pointer to memory, which is basically every language we care about. So there’s probably no meaningful “no-leak guarantee” we could ever provide, other than the trivial one for programs that never allocate at all. That’s before we even get to the fact that whether memory has “leaked” or not is entirely a matter of programmer intent; even if you had a program where no function ever returned a container/raw pointer, there would still be plenty of ways to “leak” memory inside that function.


That still doesn’t make sense to me. The halting problem is unlikely to be sufficient to prove the statement, since most leak-prevention techniques amount to some sort of garbage collection, which is a runtime activity. Techniques that trick garbage collectors are things like static values and circular references; but those types of features aren’t required for mere Turing-completeness.


Runtime scanning can detect objects that lack a reference path from the roots. Runtime scanning cannot detect ye olde “toss it in a HashMap and forget the key”. Though the more direct proof that a perfect garbage collector must also be a halting oracle would be this code:

let my_garbage_collected_object = Gc::new(vec![1, 2, 3]);
arbitrary_fn(); // <-- if arbitrary_fn() never returns, then my_garbage_collected_object should be freed

The ability to make an object unreachable by entering an infinite loop before you get around to touching it again is very much required for Turing completeness. Well, alright, there’s also the possibility of not having objects and allocation at all, and just operating on an infinite array like Busy Beaver or Brainfuck, but assuming you’re actually doing resource tracking, then proving reachability of an object is equivalent to proving reachability of code, which is not generally possible even for the simplest implementation.


I feel like this tangent isn’t really on topic and has gone on too long, so perhaps we should move the conversation elsewhere (CS.stackexchange?). But I am still not convinced that the above statement is meaningful.

Sure, but hashmap memory is accessible in other ways (particularly iteration, or by simply clearing the map), so this isn’t really a “leak”.

…not a leak, at least in any meaningful sense. Just because the program won’t actually touch the memory again doesn’t mean it’s “unreachable”.

That might be where we disagree. “Reachability” has nothing to do with detecting whether or not a program is actually going to use memory again or not; it only relates to whether the program has referential access, directly or indirectly, to that memory. And, as garbage collection in general shows, this concept is not equivalent to proving the reachability of code.