Stacked Borrows talk @ POPL2020

The recording of my Stacked Borrows talk is available online now:

If you want to know more about this work, see and


The talk is excellent btw. Very approachable.


So how "production-ready" is RustBelt, exactly? Is it mature enough to justify use of Rust in safety-critical systems?

I ask because I'm working in a sector with a lot of safety-critical software (nuclear power) and I'd be interested to know if I can point my bosses towards Rust yet.


For safety-critical usage, you probably need something like Sealed Rust. As I understand it so far, RustBelt primarily focuses on verifying the semantics of Rust and the soundness of the "scoped unsafety" principle (as well as parts of the standard library), but Sealed Rust would be a subset of Rust actually certified to do exactly what it should be doing.


My knowledge of safety-critical systems and their certification is very limited, but RustBelt is certainly not aimed at anything that has to do with convincing certification authorities. :wink:

Indeed, "Sealed Rust" looks to be much more up that alley. Its goals are fairly orthogonal to RustBelt's -- at least to my knowledge, certification of safety-critical systems does not usually entail full formal verification in a proof assistant. We have proofs that give us the utmost confidence that our model of Rust and of these data structures is sound. But for safety-critical applications, we have not done nearly enough work to make sure that the model matches reality. Conversely, I expect Sealed Rust to spend a lot of effort on things like testing the compiler against some spec, and less on formally verifying that RefCell cannot possibly cause UB in safe code, no matter what that code does.
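To make the RefCell claim concrete: the guarantee being verified is that a conflicting borrow through safe code is caught at runtime (via `Err` or a panic) rather than silently producing aliased mutable references. A minimal sketch:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);

    // A shared borrow succeeds while no mutable borrow is live.
    {
        let a = cell.borrow();
        assert_eq!(*a, 5);
    } // borrow dropped here

    // While a shared borrow is held, a mutable borrow is refused:
    // try_borrow_mut returns Err instead of handing out an aliased
    // &mut. Safe code therefore cannot reach UB through RefCell.
    let shared = cell.borrow();
    assert!(cell.try_borrow_mut().is_err());
    drop(shared);

    // Once all borrows are gone, mutation works again.
    *cell.borrow_mut() += 1;
    assert_eq!(*cell.borrow(), 6);
}
```

The point of the RustBelt proof is that this runtime discipline, together with the `unsafe` code inside RefCell's implementation, is sound against *every* possible well-typed client, not just the ones exercised by tests like this.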

(Also did I mention that the thought of having to do anything with the software that runs a nuclear power plant is really, really scary?^^)


So, dumb question, but what is it for?

Fun fact: nuclear power control systems don't actually run software, in the "lines of code" sense. They are actually controlled by "hand-made" electronic circuits, that are drawn by engineers using specialized software for circuit-drawing, and then go through like a billion layers of verification.

(at least that's how Framatome does it)

These circuits are either built as-is, or compiled and emulated on a CPU, an FPGA, or some other kind of specialized chip, depending on the plant. Most countries require at least two different types of architecture to avoid any possibility of common-cause failure; Britain in particular requires that at least one layer of redundancy be a physical, non-emulated electronic circuit.


Looking at their blog posts, it sounds like we're gonna have to wait a few more years before it can be sold to managers in safety-critical domains who have no prior knowledge of Rust.

Disclaimer: I've been retired from this field for a few years, so don't know current common practice (and never did know worst practice, as that is always a closely-held supplier secret, at least until public post-upset event analysis :sob:).

During my era, safety shutdown systems for reactors were always implemented in relatively transparent hardware using redundant sensors, actuators and logic. Common-mode failure analysis was used to avoid obvious shared failure mechanisms. Nevertheless the designers were human, so some unanticipated common-mode failures were not caught (e.g., the failures at Three Mile Island and Chernobyl, which were partly due to human operator actions).

Rust, like Ada, is an obvious long-term candidate base language for safety-critical software. However it's virtually certain that a constrained safety-critical subset language will be required, similar to Ada SPARK. That subset likely will employ design-by-contract (DBC) and formal verification tools, similar to SPARK.
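To illustrate what a design-by-contract style could look like in Rust today: preconditions and postconditions written as assertions around a function body. This is only a runtime sketch (the function `isqrt` and its contracts are my own example, not part of any proposed subset); in a SPARK-like Rust subset these annotations would instead be discharged statically by a verifier.

```rust
/// Integer square root with contract-style checks.
/// Postcondition: returns the largest r with r * r <= n.
fn isqrt(n: u64) -> u64 {
    let mut r = 0u64;
    // checked_mul avoids overflow while searching upward.
    while (r + 1).checked_mul(r + 1).map_or(false, |sq| sq <= n) {
        r += 1;
    }
    // Postconditions, checked in debug builds only:
    debug_assert!(r * r <= n);
    debug_assert!((r + 1).checked_mul(r + 1).map_or(true, |sq| sq > n));
    r
}

fn main() {
    assert_eq!(isqrt(0), 0);
    assert_eq!(isqrt(15), 3);
    assert_eq!(isqrt(16), 4);
}
```

The difference SPARK-style tooling makes is that such contracts are proven to hold for *all* inputs at compile time, rather than spot-checked at runtime.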

Addendum to bring this post on-topic: RustBelt and Stacked Borrows provide some of the theoretical underpinnings for the safety analysis of Rust and its libraries (e.g., core, libc). Of themselves they are insufficient to demonstrate the safety of Rust code, which demonstration inherently must reflect the intended purpose of the code.


It's about making sure that Rust-style borrowing with lifetimes actually works (i.e., that it excludes all memory and thread safety issues), and that APIs like Rc, Mutex or RefCell are actually safe to use by any possible well-typed client (i.e., that the unsafe code in there is properly encapsulated -- and that encapsulation again relies a lot on lifetimes). This is validating some of the core concepts of (idealized) Rust, in an iron-clad machine-checked mathematical proof.
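A small example of the "properly encapsulated unsafe code" pattern that RustBelt's theorems are about: a function that uses raw pointers internally but exposes an API that safe callers cannot misuse. This is a simplified sketch in the spirit of the standard library's `split_at_mut`, not RustBelt's own formal artifact.

```rust
/// Split a slice into two disjoint mutable halves.
/// The unsafe raw-pointer work is hidden behind a safe signature:
/// no well-typed caller can use it to create aliasing &mut references.
fn split_at_mut(slice: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = slice.len();
    assert!(mid <= len, "mid out of bounds");
    let ptr = slice.as_mut_ptr();
    // SAFETY: the ranges [0, mid) and [mid, len) do not overlap,
    // so the two &mut borrows returned here never alias.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4, 5];
    let (left, right) = split_at_mut(&mut data, 2);
    left[0] = 10;
    right[0] = 30;
    assert_eq!(data, [10, 2, 30, 4, 5]);
}
```

Proving that the safe signature really does exclude every misuse by every possible client is exactly the kind of statement the RustBelt proofs establish (for the real library types, in Coq).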

We did not have the primary goal of ensuring that the implementation of these concepts is correct. Formally verifying rustc and its LLVM backend would be... rather impossible, I'd say, except if some billionaire shows up and wants to fund it. :wink: