Disclaimer: I've been retired from this field for a few years, so I don't know current common practice (and never did know worst practice, as that is always a closely-held supplier secret, at least until public post-upset event analysis).
During my era, safety shutdown systems for reactors were always implemented in relatively transparent hardware using redundant sensors, actuators, and logic. Common-mode failure analysis was used to avoid obvious shared failure mechanisms. Nevertheless, the designers were human, so some unanticipated common-mode failures were not caught (e.g., the failures at Three Mile Island and Chernobyl, which were in part due to human operator actions).
Rust, like Ada, is an obvious long-term candidate base language for safety-critical software. However, it's virtually certain that a constrained safety-critical subset language will be required, similar to Ada's SPARK. That subset will likely employ design-by-contract (DBC) and formal verification tools, as SPARK does.
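To make the DBC idea concrete, here is a minimal sketch of contract-style checks written in plain Rust. In a SPARK-like subset the pre- and postconditions would be declarative annotations discharged statically by a prover rather than runtime assertions; the function and its contract below are purely illustrative, not drawn from any actual standard or subset.

```rust
/// Clamp a sensor reading into a valid range.
/// Precondition:  lo <= hi
/// Postcondition: lo <= result <= hi
///
/// A SPARK-style toolchain would prove these contracts at compile time;
/// here they are approximated with debug assertions.
fn clamp_reading(value: f64, lo: f64, hi: f64) -> f64 {
    // Precondition check (statically verified in a true DBC subset).
    debug_assert!(lo <= hi, "precondition violated: lo > hi");
    let result = value.max(lo).min(hi);
    // Postcondition check.
    debug_assert!(result >= lo && result <= hi, "postcondition violated");
    result
}

fn main() {
    assert_eq!(clamp_reading(120.0, 0.0, 100.0), 100.0);
    assert_eq!(clamp_reading(-5.0, 0.0, 100.0), 0.0);
    assert_eq!(clamp_reading(42.0, 0.0, 100.0), 42.0);
    println!("all contract checks passed");
}
```

The point of the sketch is the shape of the discipline: every boundary function states its assumptions and guarantees explicitly, so a verification tool (or, failing that, a runtime check) can enforce them.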
Addendum to bring this post on-topic: RustBelt and Stacked Borrows provide some of the theoretical underpinnings for the safety analysis of Rust and its libraries (e.g., libc). By themselves they are insufficient to demonstrate the safety of Rust code; such a demonstration must inherently reflect the intended purpose of the code.