Make mem::uninitialized and mem::zeroed panic for (some) types where 0 is a niche

For some time now, mem::zeroed, mem::uninitialized and MaybeUninit::assume_init have called the panic_if_uninhabited intrinsic, which monomorphizes to a panic for uninhabited types (types considered uninhabited in their layout). This prevents bugs like this, and it provides a clear warning signal: if you are providing a generic API and users can trigger this panic, they can also trigger UB.
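
To make "uninhabited" concrete, here is a small illustration (the names `Void` and `unreachable_from` are invented for this example): a type with no values, for which no bit pattern whatsoever is valid, so zero-initializing it can never be sound.

```rust
// An uninhabited type: no value of it can ever exist, so *any* bit
// pattern (including all-zero) is invalid -- hence the panic.
enum Void {}

// Holding a Void proves the code path is unreachable: a match on it
// needs no arms at all.
#[allow(dead_code)]
fn unreachable_from(v: Void) -> ! {
    match v {}
}

fn main() {
    // Option<Void> can only ever be None, and the layout reflects that:
    let opt: Option<Void> = None;
    assert!(opt.is_none());
    assert_eq!(std::mem::size_of::<Option<Void>>(), 0);
}
```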

For mem::zeroed, we could actually do better: we could cause a panic if the type is something like a reference or fn that has "0" in its niche, and we can check this recursively inside structs/tuples. The Nomicon calls out producing such values as UB. And we could do the same for mem::uninitialized. (In principle, mem::uninitialized could get an even stronger check, but starting with the types that do not accept the all-0 bit pattern seems like a good idea.)
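
A minimal illustration of such niche types (nothing here is specific to the proposal; it only shows, via safe code, why the all-0 bit pattern is invalid for them):

```rust
use std::num::NonZeroU8;

fn main() {
    // 0 is NonZeroU8's niche: the all-zero bit pattern is not a valid
    // value, which is exactly why mem::zeroed() producing one is UB.
    assert!(NonZeroU8::new(0).is_none());
    assert!(NonZeroU8::new(1).is_some());

    // The same holds for &T, fn() and NonNull<T>, and recursively for
    // structs/tuples containing them:
    #[allow(dead_code)]
    struct HasNiche(&'static u8); // zeroed() for this struct would also be UB
}
```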

Of course, there is the risk that people replace mem::zeroed() by MaybeUninit::zeroed().assume_init() to work around the panic. But assume_init has been documented from the start to not permit this, while mem::uninitialized/mem::zeroed were not properly documented for a long time.

However, as reports like this show, there is code out there (probably more than we know of) that uses mem::zeroed/mem::uninitialized incorrectly, and that would cause this panic all the time. This recent PR caused SIGILL for at least some code that uses mem::uninitialized on types with a niche. This is somewhat comparable to the panic-on-unwind-across-FFI situation. Thus we should probably wait a bit before making this a panic, or provide a flag so users can test their code before this becomes the default, or run this through crater, or something like that? What do you think?


I think this is a solid idea for two reasons:

  • It prevents LLVM from getting excited about speculatively dereferencing a null &T.
  • It teaches people that, unlike e.g. in Go, you can't just memset a pile of bits to zero and be all like "ah yes, this is fine because I'm going to initialize this in five seconds". Crashes are much more instructive than your program mysteriously becoming faster because LLVM deleted your code and your lunch.

I would like this. I'd also like mem::zeroed() and MaybeUninit's assume_init/read/get_ref/get_mut to go a bit further and have a debug_assert! that checks and makes sure the bit pattern is valid.


The problem with debug_assert! is that std is compiled and distributed in release mode, even for debug binaries. This means any debug_assert! in std is essentially a dead assert currently. I don't remember exactly how it happens, but I remember the way overflow checks are added to std being called a big hack.


For some time now, mem::zeroed, mem::uninitialized and MaybeUninit::assume_init call the panic_if_uninhabited intrinsic, which monomorphizes to a panic for uninhabited types (types considered uninhabited in their layout).

But why not give a compile-time error instead of a runtime error in this case?


This is a post-monomorphization check, so we can't really give good compile-time errors.

What could be done is to have a trait that says "can be initialized with zeroes", and someone is working on that.
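
A rough sketch of what such a trait could look like (the name `ZeroInit` is invented here, and the actual design being worked on may well differ; the `bytemuck` crate's `Zeroable` trait follows a similar pattern):

```rust
/// Safety: implementors assert that the all-zero bit pattern is a
/// valid value of the type.
unsafe trait ZeroInit: Sized {
    fn zeroed() -> Self {
        // Sound by the trait's safety contract.
        unsafe { std::mem::zeroed() }
    }
}

unsafe impl ZeroInit for u32 {}
unsafe impl ZeroInit for [u8; 16] {}
// No impl for &T, fn() or NonZeroU8 -- using those fails to compile,
// which is the compile-time error asked for above.

fn main() {
    let n: u32 = ZeroInit::zeroed();
    assert_eq!(n, 0);
    let arr: [u8; 16] = ZeroInit::zeroed();
    assert_eq!(arr, [0u8; 16]);
}
```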

Indeed, doing more checks in debug mode would be nice. Also see this and this. But as @CAD97 mentioned, these are currently not very effective.

Also #51713.

Could std do the zeroed check itself by making an auto trait like we do for the Freeze trait?



// Nightly-only sketch: auto traits, negative impls and specialization
// are all unstable features.
#![feature(auto_traits, negative_impls, specialization)]

pub(crate) auto trait ZeroValid {}

use std::num::*;
impl !ZeroValid for NonZeroU8 {}
impl !ZeroValid for NonZeroU16 {}
impl !ZeroValid for NonZeroU32 {}
impl !ZeroValid for NonZeroU64 {}
impl !ZeroValid for NonZeroU128 {}

use std::ptr::NonNull;
impl<T> !ZeroValid for NonNull<T> {}

trait IsZeroValid {
    fn is_zero_valid() -> bool;
}
// Types that opted out of ZeroValid above fall through to this impl.
impl<T> IsZeroValid for T {
    default fn is_zero_valid() -> bool { false }
}
// All other types are ZeroValid automatically, so they pass the check.
impl<T: ZeroValid> IsZeroValid for T {
    fn is_zero_valid() -> bool { true }
}

unsafe fn zeroed<T>() -> T {
    assert!(<T as IsZeroValid>::is_zero_valid());
    std::mem::zeroed()
}

That way it defaults to being allowed (so it's not a breaking change), but we can add new things to the "panics at runtime" list as we find more of them.

I think it'd be fine for Rusty types. The only case where I can imagine it going wrong is FFI code that erroneously uses fn() instead of Option<fn()> (because C doesn't care, and handing out a zeroed/uninitialized struct to be filled in with function pointers is a common C pattern).
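
To illustrate that pattern (the `Ops` struct and its field name are made up for this example), the faithful Rust binding uses Option, for which None is guaranteed to be the all-zero bit pattern:

```rust
// C commonly hands out a zeroed callback table to be filled in later:
//
//     struct ops { void (*init)(void); };
//     struct ops o = {0};
//
#[repr(C)]
struct Ops {
    // Correct: Option<fn> has a null niche, so None is all-zero.
    init: Option<extern "C" fn()>,
    // init: extern "C" fn(),  // wrong: zeroing this field would be UB
}

fn main() {
    let ops: Ops = unsafe { std::mem::zeroed() }; // fine with Option
    assert!(ops.init.is_none());
}
```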

Sorry to start off with bikeshedding but I recommend against the name IsZeroValid, or anything else that suggests this is the go-to solution for checking some aspect of validity. There's already lots of potential traits being discussed/proposed for marking when certain bit patterns or transmutes are OK for a type, let's not muddy that design space further by naming this thing, which exists only as a guard rail for mem::zeroed, in a way that makes it sound related to those traits.

Perhaps MemZeroedSanityCheck?

I was expecting that this would be a private implementation detail, like Freeze is, and thus the name would be ultimately unimportant. We can add `, "It's unsound to use zeroed with this type"` to the assert! in the code sketch to hide it completely.

This is in fact one reason why I want the panic -- people using fn() instead of Option<fn()> has been causing SIGILLs recently.

The advantage of an intrinsic is that we could also eventually make it check enums and see if 0 is a valid discriminant and if all fields for that discriminant are 0-valid.
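
For example (both enum types below are invented for illustration), whether the all-zero pattern is valid for an enum depends on its discriminants and on the fields of the variant with discriminant 0:

```rust
#[derive(Debug, PartialEq)]
#[repr(u8)]
enum HasZero {
    A = 0, // fieldless variant with discriminant 0: all-zero is valid
    B = 1,
}

#[allow(dead_code)]
#[repr(u8)]
enum NoZero {
    C = 1, // no variant has discriminant 0,
    D = 2, // so mem::zeroed::<NoZero>() would be UB
}

fn main() {
    let x: HasZero = unsafe { std::mem::zeroed() };
    assert_eq!(x, HasZero::A);
}
```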

But ultimately I do not care strongly about how this is implemented. The discussion was supposed to be more about whether we want this at all and on which timeline. We should probably wait at least until 1.39, which deprecates mem::uninitialized -- so there will have been three releases' worth of time to port things to MaybeUninit.


Hmm, but any compile-time error is better than a crash at runtime. Even if it only says "somewhere in your 1M-line program there is code that does the wrong thing", that is better than a runtime failure.


There is precedent for the opposite (@oli-obk knows the RFC number): when we find out that a piece of code will definitely overflow or otherwise cause a panic at run-time, we have to (the RFC mandates) still only make this a warning, not a hard compiler error.

For example, a compile-time error can be worse if this is dead code in a crate you depend on (now your project doesn't compile any more after a rustc upgrade, great). There is a conflict between such compile-time errors and our stability guarantees.


Easy to remember number, it's like 1337, but not:

Note that you can always get the compile-time error behaviour via #[deny(const_err)]


Why opposite? I am not picky; if it is not possible to make it an error, a warning would also be fine. A warning during compilation is still much, much better than a crash while running the program.

And what about Rust editions: would it be possible to make the transition from warning to error during an edition = 20xy change?


@Dushistov then you will like the discussion here.

This is a post-monomorphization check, so we can't really give good compile-time errors.

I think that a bad compilation error is better than a panic, in particular, when running crater.

I think we should do this, soon, but with care.

I imagine that code getting hit by this might not be easy to fix, e.g., because some dependencies expose FFI bindings that use fn types instead of Option<fn> types, or similar. If the libc crate does this somewhere, which it probably does, and some important dependency depends on it, it might take a while until the issue is solved.

So I think the first step would be implementing this and doing crater run iterations in which we try to fix the most fundamental crates in the ecosystem, filing issues, etc. EDIT: for these iterations, we can make these emit a compilation error instead of a panic.

Once that's done, we can enable this on nightly only. Those who run into breakage don't need a flag, because they can just pin an older nightly until their issues are resolved.

Once this has been enabled on nightly for a couple of releases, only then should we consider promoting the check to stable Rust. We probably want to do a couple of crater iterations again with beta and stable Rust releases having this feature enabled, making sure that most of the ecosystem builds.

That's a good point.

As discussed in this issue, while we cannot have a perfect lint, and (for the reasons given above) don't want a hard error, we can have a best-effort warn-level lint. So if anyone reading wants to help with this situation, writing that lint would be a great way to do that. :slight_smile:

The lint should find calls to mem::zeroed::<T> and mem::uninitialized::<T>, and look at T to see whether it can definitely determine that T is not valid for the all-0 bit pattern. This is basically about recursing into the fields of structs and tuples to find a reference or function pointer. Even better if it can also identify NonNull and NonZero*. We stop when hitting a type variable, and we prefer not warning over having way too many false positives.
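
That recursion can be sketched over a toy type representation (the real lint would work on rustc's internal Ty representation; everything below is invented for illustration):

```rust
// Toy stand-in for the compiler's type representation.
enum Ty {
    Ref,            // &T, &mut T
    FnPtr,          // fn(...)
    NonNull,        // std::ptr::NonNull<T>
    NonZero,        // NonZeroU8, NonZeroU32, ...
    Tuple(Vec<Ty>), // also covers struct fields for this sketch
    Param,          // generic type variable: give up, don't warn
    Other,          // anything we don't recognize
}

/// True only if we can *definitely* tell that the all-0 bit pattern is
/// invalid; prefers missed cases over false positives, as described above.
fn definitely_zero_invalid(ty: &Ty) -> bool {
    match ty {
        Ty::Ref | Ty::FnPtr | Ty::NonNull | Ty::NonZero => true,
        Ty::Tuple(fields) => fields.iter().any(definitely_zero_invalid),
        Ty::Param | Ty::Other => false,
    }
}

fn main() {
    // struct S(u8, &'static u8) -> definitely invalid, warn
    assert!(definitely_zero_invalid(&Ty::Tuple(vec![Ty::Other, Ty::Ref])));
    // a bare type parameter -> stay silent
    assert!(!definitely_zero_invalid(&Ty::Param));
}
```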