I don't know if that's reasonable, but it's the minimum to ask from the community for a healthy ecosystem. And I'd say that's already an existing (possibly implicit) rule: crates violating it get a RUSTSEC advisory, RUSTSEC-2025-0143 for example.
Maybe asking for the scope of unsafe to never escape the closest enclosing module, instead of the crate, is also acceptable. We would have to compare how many crates violate that and what we would gain if they did not.
Struct invariants alone may not be sufficient in some cases. Do you think the following example (discussed in Section 5.3) is appropriate? The struct has no invariants; get is declared unsafe for reasons unrelated to any struct invariant.
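For concreteness, here is a minimal sketch of that shape (my own hypothetical names, not necessarily the paper's exact example): the struct upholds no invariant of its own, and `get` is unsafe purely because of the unchecked operation in its body.

```rust
// Hypothetical sketch: `Items` has no struct invariant -- any `Vec<u32>`
// is a valid `Items`. `get` is unsafe only because its body performs an
// unchecked indexing operation, not because of any invariant on the struct.
pub struct Items(pub Vec<u32>);

impl Items {
    /// # Safety
    /// `i` must be less than `self.0.len()`.
    pub unsafe fn get(&self, i: usize) -> u32 {
        // SAFETY: the caller guarantees `i` is in bounds.
        unsafe { *self.0.get_unchecked(i) }
    }
}
```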
When you say controversial, do you mean that the Weak Struct-level Soundness Criterion contradicts the practice of relying on module-level safety invariants adopted by most crates?
As far as I know, the module-level criterion is not sufficient to fully characterize the soundness criteria adopted in the Rust Standard Library and Rust-for-Linux. See the discussion in Section 3.2 below.
Struct invariants are just a lower bound on what is necessary for soundness; any function may add extra requirements of its own. See, for example, the various assume_init methods: they require more than just MaybeUninit's weak invariants to be sound.
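A minimal illustration of that gap: the weak invariant of `MaybeUninit<u32>` is satisfied even while the value is uninitialized, and only `assume_init` demands the stronger property that the value actually be initialized.

```rust
use std::mem::MaybeUninit;

// The weak struct invariant holds throughout: an uninitialized
// `MaybeUninit<u32>` is a perfectly valid `MaybeUninit<u32>`.
fn init_demo() -> u32 {
    let mut m = MaybeUninit::<u32>::uninit();
    // Calling `assume_init` at this point would be UB: the *function*
    // requires more than the struct invariant, namely initialization.
    m.write(7);
    // SAFETY: `m` was initialized by the `write` above.
    unsafe { m.assume_init() }
}
```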
I have confirmed with the Rust team that module-level soundness is the default criterion adopted in the Rust standard library. Section 3 has been revised accordingly.
In addition, I have added the Principle of Least Unsound Scope to emphasize that unsafe code should be confined to the smallest scope necessary, with safety enforced as early as possible.
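A sketch of how I read that principle (my own example, not taken from the revised text): check the precondition as early as possible, and confine `unsafe` to the single expression that needs it rather than spanning the whole function.

```rust
// Illustration of confining unsafe to the smallest necessary scope:
// the safety check happens up front, and the `unsafe` block wraps only
// the one expression that actually needs it.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None; // safety enforced as early as possible
    }
    // SAFETY: `bytes` is non-empty, so index 0 is in bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}
```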
Cool! Thanks for checking. I'm curious whether that rule is enforced in any way (like the crate-level one is by RUSTSEC) and whether the consequences of this decision have been measured, as in the conclusion of The CXX Debate:
Sometimes, repeating the same thing over and over again makes mistakes more likely, not less.
EDIT: Ah my bad, the Rust team only talked about the standard library... So it doesn't say anything about crates.
Exactly. assume_init imposes extra safety requirements because of the unsafe operations inside its implementation.
pub const unsafe fn assume_init(self) -> T {
    // SAFETY: the caller must guarantee that `self` is initialized.
    // This also means that `self` must be a `value` variant.
    unsafe {
        intrinsics::assert_inhabited::<T>();
        // We do this via a raw ptr read instead of `ManuallyDrop::into_inner` so that there's
        // no trace of `ManuallyDrop` in Miri's error messages here.
        (&raw const self.value).cast::<T>().read()
    }
}
The library contract of any unsafe fn can be more restrictive than what is actually in the source code, and it amounts to more than the sum of that function's calls. This is because the contract implicitly states that it is not a "breaking change" for the function to internally perform an operation that looks like
if any_described_function_invariant_violated(&world_state) {
    unsafe { ::core::hint::unreachable_unchecked() };
}
Notably, unreachable_unchecked() is a function call that places an infinite demand on the caller: you must not call it, and its safety obligations can never be satisfied. This makes any control flow that reaches it an impossibility, which the compiler may remove.
Without such an arbitrary detect-then-UB construct in the source code, of course, the function does not actually exhibit UB; writing such a check may even be impossible in a given function. And with such UB-free source code, we can say the program is UB-free based only on the sum of those calls. But the agreement between the developer who writes an unsafe fn and specifies a requirement, and the person who calls it, is not merely that their code doesn't exhibit UB today. Callers are instead at the mercy of the library's author: the call could become UB in a future version, with no break in the library contract.
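To make that concrete, here is a hypothetical library function (names and numbers are mine) whose documented contract is stricter than anything its current body does, so a caller that violates the contract is not wrong *yet*, just one dependency update away from UB.

```rust
/// Returns the doubled value.
///
/// # Safety
/// `x` must be less than 100. The current body never relies on this,
/// but the contract reserves the right for a future version to do so.
pub unsafe fn double_small(x: u32) -> u32 {
    // A future, contract-compatible revision could insert:
    //
    //     if x >= 100 {
    //         // SAFETY: the caller promised `x < 100`.
    //         unsafe { core::hint::unreachable_unchecked() };
    //     }
    //
    // without that being a breaking change to the library contract.
    x * 2
}
```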
People are usually seeking not only a correct program, but for their usage to remain correct over multiple compilations, even after updating the compiler and their dependencies (assuming only minor API expansions and no breaking changes, of course).
Your point about the implicit right to insert UB is critical. This focus on "cross-version stability" is exactly what seems necessary here.
As a developer deeply invested in code safety and soundness, I'm curious: How do we translate this future-proofing requirement into the actionable guidance of the Standard? Should the RFC explicitly state that any violation of the documented contract (even one that is unreachable or harmless in the current implementation) must be treated as a hard failure? I'd love to hear your thoughts on how to word this to ensure developers take the documentation as a strict constraint rather than a suggestion. Thank you~
I think that's reasonable for explicitly-documented safety preconditions and similar "do not do this" requirements, but with an exception noted for pinned dependencies and/or pinned compiler versions (and other similar edge cases where the developer has more control than usual).
I'm not aware of any cases of an unsafe function with explicitly-documented safety conditions where I wouldn't consider violating the safety conditions to be a hard error (outside of contrived examples I could make as a counterexample to this statement), but there are plenty of people with far more experience than me on this forum... maybe someone else has an example.
There's unsafe code in gray zones, though, that I would consider to be risky but not a hard error. In those scenarios, the unsafe code is making assumptions that are reasonable but not explicitly supported by documentation or similar. Whether you want to categorize that as "the contract is not fully and explicitly documented, and therefore determining the defined behavior of the contract is hard" or "the contract consists of what can be determined from explicit documentation, and therefore this behavior is undefined" depends on what semantics you prefer.[1]
I think of how bumpalo-herd uses unsafe lifetime extension, resulting in situations that contradict the internal safety comment on the impl of Send for bumpalo::Bump. Of course, I know that bumpalo will never actually make a "technically non-breaking" change that breaks bumpalo-herd, but bumpalo doesn't document conditions sufficient to soundly lifetime-extend values allocated in a Bump. (I've literally used "bumpalo-herd does such-and-such" to justify some of my own unsafe code.)
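Roughly the shape of that pattern (my paraphrase, not bumpalo-herd's actual code): an unsafe lifetime extension whose justification rests on assumptions about the allocator's behavior rather than on any documented contract.

```rust
// Rough sketch of unsafe lifetime extension (my paraphrase, not
// bumpalo-herd's actual code). The output lifetime 'b is unrelated to
// the input lifetime 'a; the only justification is the undocumented
// assumption that the owning allocation is never freed or reset while
// 'b is live.
unsafe fn extend_lifetime<'a, 'b, T>(r: &'a T) -> &'b T {
    // SAFETY (assumed by the caller, not guaranteed by any documented
    // contract): the referent outlives 'b.
    unsafe { &*(r as *const T) }
}
```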
Maybe there's a jargon definition of "contract" relevant here? If so, I don't know it, so I apologize if some of my discussion here is irrelevant. ↩︎