Recently, while trying to reduce the amount of UB in abomonation, I discovered one interesting roadblock.
Currently, abomonation writes down arbitrary binary data into standard Rust std::io::Write streams by transmuting &T into a suitably large &[u8] and sending that slice of bytes to Write::write_all(). The resulting binary stream is to be subsequently consumed either within the same process, or by another instance of the same program...
...but this is incorrect according to our current UB rules, because type T may contain padding bytes, which act as mem::uninitialized::<u8>(). And passing down references to uninitialized memory to safe code (in this case the Write implementation) which may well decide to read from it is not good. The correct way to transmute a type T into its binary representation would be to transmute &T into &[MaybeUninit<u8>], not &[u8], and then we end up with a slice-of-bytes type that cannot be sent to std::io::Write...
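The two byte views being contrasted can be sketched as follows (the helper names are mine, not abomonation's actual code; `as_bytes_ub` is the problematic pattern):

```rust
use std::mem::{self, MaybeUninit};
use std::slice;

// The problematic pattern: if T has padding, some of these u8s are
// uninitialized, which the current rules do not allow.
unsafe fn as_bytes_ub<T>(t: &T) -> &[u8] {
    slice::from_raw_parts(t as *const T as *const u8, mem::size_of::<T>())
}

// The correct byte view: MaybeUninit<u8> may legally hold an
// uninitialized byte... but std::io::Write cannot accept this slice.
unsafe fn as_uninit_bytes<T>(t: &T) -> &[MaybeUninit<u8>] {
    slice::from_raw_parts(
        t as *const T as *const MaybeUninit<u8>,
        mem::size_of::<T>(),
    )
}
```

Both views are fine for padding-free types like `u32`; the trouble starts only once `T` has padding.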
As far as I can tell, this cannot be resolved without some kind of language or Write API change.
From some discussion with @RalfJung, it appears that two possible ways to fix this would be...
The good old freeze() can of worms. This is an elusive compiler intrinsic which would allow turning an uninitialized MaybeUninit<u8> into an arbitrary initialized u8 value.
This idea has been discussed for a long time, and was historically met with some resistance as it can ease some forms of information leakage if buffers with sensitive contents are not properly zeroed out before reuse.
More recently, it also emerged that it is not even clear whether LLVM can actually provide the required support for this intrinsic. And we need LLVM's help, as its normal uninitialized memory semantics are very weird.
Finally, freeze() is widely perceived as a hack that can almost always be replaced by something better, and this sentiment has proved rather accurate so far. Therefore, any attempt to get it into Rust would meet significant resistance and require strong motivation.
Extend std::io::Write so that it supports writing down uninitialized bytes.
This feels more right in principle, but it is not clear to me how this can be done in a manner that is backwards-compatible with existing Write implementations, without introducing a freeze()-based default implementation.
If it is not backwards-compatible, would we need a separate UninitWrite trait?
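For concreteness, here is one hypothetical shape such a trait could take (name and signature invented for discussion; nothing like this exists in std), together with one implementation that is trivially sound because it never reads the bytes:

```rust
use std::io;
use std::mem::MaybeUninit;

// Hypothetical trait: like Write, but the buffer may contain
// uninitialized bytes. Note there is no sound blanket impl in terms
// of Write without freeze(), since write_all demands initialized bytes.
trait UninitWrite {
    fn write_all_uninit(&mut self, buf: &[MaybeUninit<u8>]) -> io::Result<()>;
}

// A sink that only counts bytes is sound: it never inspects their values.
struct ByteCounter(usize);

impl UninitWrite for ByteCounter {
    fn write_all_uninit(&mut self, buf: &[MaybeUninit<u8>]) -> io::Result<()> {
        self.0 += buf.len();
        Ok(())
    }
}
```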
May I request your opinion on whether I have overlooked anything? In particular, since neither of the above options is particularly appealing, "third option" solutions that go for something completely different are most welcome.
It would also bring other benefits, like the ability to stuff data into the space that today is wasted on padding bytes. However, I feel actually delivering this feature might be even harder than either of the aforementioned ‘unappealing’ options.
And even then, it would not work for #[repr(C)] types, which may contain inner padding bytes.
Never mind, I realised this would not help even for Rust ABI types. It's still possible to have internal padding even when stride ≠ size.
```rust
struct X {
    a: u32,
    b: u8,
}

struct Y {
    a: X,
    b: u16,
}
```
Assuming X is laid out as (u32, u8) (the obvious choice with no padding at all), you can either lay out Y as (X, u16) (which will have 1 byte of padding after X), or as (u16, X) (which will have two bytes of padding before X).
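For what it's worth, the #[repr(C)] variants of these structs make internal padding measurable, though the guaranteed C layout differs from the hypothetical padding-free X above (repr(C) rounds X itself up to 8 bytes, and that trailing padding then sits inside Y):

```rust
use std::mem::{offset_of, size_of};

#[repr(C)]
struct X { a: u32, b: u8 }  // a at 0, b at 4, 3 trailing padding bytes

#[repr(C)]
struct Y { a: X, b: u16 }   // X's trailing padding becomes internal to Y
```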
Well, at least it cannot compile to a no-op, anyway.
Yes, that transmute seems correct (assuming the appropriate length), but then what’s a correct implementation of an UninitWrite trait that does anything useful with it? Is it sound to pass uninitialized bytes to libc::write, for example?
I would be tempted to reply that the Rust memory model does not need to encompass every single weird thing that the Linux virtual memory subsystem does, and that we could handle that sort of thing via either volatile accesses or a new wrapper type (let's call it WriteSensitive<u8> for the purpose of discussion) which highlights the fact that writing to a certain memory region has side-effects.
However, the fact that even plain malloc() is similarly ill-behaved on Linux could make this an untenable position... And I wouldn't be surprised if other OSes engaged in similar but subtly different variants of those tricks, making it harder to provide an abstraction that's both portable and reasonably amenable to compiler optimizations.
@felix.s An even simpler way to get padding in repr(rust) is via slices. For optimal performance, one would want &[T] to be written down in a single write_all() operation, not one per slice element, but that's not generally possible if there is no way to send padding bytes to Write.
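A sketch of the sound-but-slow fallback this forces (`Elem` and `write_elems` are invented for illustration; #[repr(C)] just makes the sizes guaranteed):

```rust
use std::io::{self, Write};
use std::mem::size_of;

#[repr(C)]
struct Elem { a: u32, b: u8 } // 5 bytes of data, padded to size 8

// Without a way to hand padding bytes to Write, a single bulk
// write_all over the whole slice is out; the sound fallback writes
// each field separately, skipping the padding.
fn write_elems(w: &mut impl Write, data: &[Elem]) -> io::Result<()> {
    for e in data {
        w.write_all(&e.a.to_ne_bytes())?;
        w.write_all(&[e.b])?;
    }
    Ok(())
}
```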
That's a very good question. Since Rust's move operation is pretty much defined to be implemented via libc::memcpy (or more precisely a compiler intrinsic that may or may not call that function), there is prior art for sending uninitialized bytes to libc functions that will read from them. But I'm not sure if we have a clear policy somewhere about which lines must not be crossed when doing so.
I think I once saw @RalfJung say that the UCG group was considering making it legal to copy uninitialized bytes around, but not to do more significant things with them such as comparing their values to those of initialized bytes. I'm not sure if that would be enough to resolve the problem at hand, though, since we don't really know what libc::write implementations do.
Now that LTO between C code and Rust code is possible, which makes such behavior visible from the LLVM optimizer, this may become a more pressing concern.
And that’s just one of the things you might want to do in an UninitWrite impl. My point is that a different trait doesn’t solve this problem; it only moves it.
I agree, and this is why I would personally be more in favor of a general mechanism for making padding bytes well-defined, like freeze() does. But since fully general freeze() seems hard (or impossible, depending on who you ask), maybe something of more restricted scope would help? I'm not sure where to tune the scope boundary here, though.
I agree it should be possible to write uninitialized memory to disk if desired.
But, well. This reminds me of a story about a certain Game Boy Color game, whose developers built the cartridge ROM data using a program that left unused regions as uninitialized memory. The program ran on Windows, at a time when Windows did not provide memory protection between processes. As a result, every cartridge sold includes several kilobytes of data apparently originating from a web browser that had been running on the system. Specifically... it includes HTML extracts from a late 90's porn site. Really.
With this kind of example in mind, I'd be more interested in a way to mark types as needing deterministic padding bytes, avoiding the uninitialized memory altogether.
In order to resolve the abomonation issue, this mechanism would need to operate at the granularity of values, not just types, since we serialize and deserialize data of types from std which we do not "own" and cannot mark as needing deterministic padding.
Could something like zero_padding(&foo) work? Here "zero_padding" would simply be an intrinsic or special Rust compiler function that, given a structure, zeroes out all the padding bytes within it, making them well-defined. Or would this interfere with "niche" uses of padding bytes?
I personally like ::zerocopy's approach to the issue, which I have presented in detail in this post. That is, make structs that contain no "implicit padding" and check that property at compile time.
For those still needing padding bytes, you can just make "implicit padding" become "explicit" by adding padding fields:
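A sketch of that pattern on the running example (the `_pad` field name is invented; zerocopy's derives are what would check this shape at compile time):

```rust
// Every byte of the struct is now a real, initialized field, so a
// byte-level view of it exposes no uninitialized memory.
#[repr(C)]
struct Padded {
    a: u32,
    b: u8,
    _pad: [u8; 3], // the bytes rustc would otherwise insert as implicit padding
}
```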
If something like this could be provided by the compiler as an intrinsic, it would be much less cumbersome to do, while avoiding the "write 90s porn" path (which could also lead to copyright infringement).
In the meantime, it looks like a PoC could be built as an ordinary crate with some custom #[derive(...)].
Does it? As I recall it, reading general uninitialised memory in C is undefined behaviour, full stop, no matter the pointer type. Padding bytes are subject to a special exception that makes them a weaker category of undefinedness.
This SO answer seems to confirm the latter (I don't have a copy of the standard at hand right now to verify myself):
This intrinsic would zero all padding bytes and then return a reference to the original object as a byte slice. The padding bytes are guaranteed to stay zeroed as long as the original object is not moved or modified, which is guaranteed by the lifetime of the returned slice.
Since this is a safe function, an immutable slice is returned. This prevents you from setting some field to an illegal value, but just looking at the bytes is fine.
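A hand-rolled sketch of what that intrinsic would do, for one specific #[repr(C)] type (the real intrinsic would work generically; the hard-coded 5..8 range is simply this layout's padding):

```rust
use std::{mem, slice};

#[repr(C)]
struct X { a: u32, b: u8 } // padding at bytes 5..8

fn zero_padding_x(x: &mut X) -> &[u8] {
    let p = x as *mut X as *mut u8;
    unsafe {
        // Zero the padding bytes so every byte of X is initialized.
        for i in 5..mem::size_of::<X>() {
            p.add(i).write(0);
        }
        // Returning a shared slice borrowed from x keeps the object
        // immutable (and unmoved) while the slice is alive.
        slice::from_raw_parts(p as *const u8, mem::size_of::<X>())
    }
}
```

The &mut-in, &-out signature is what encodes the guarantee described above: the padding stays zeroed for the lifetime of the returned slice, because the object cannot be modified or moved during that borrow.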
I wonder, though, if this would break "niche" uses of padding bytes/bits? That's not a thing, correct? Are "niches" restricted to unused bits/bit patterns of actual types, so that padding can never be used as a "niche" (for example, to encode the variant of an enum)?
I have not messed around with this a lot, but since this is a "compiler built-in", I would assume that it would account for that, since the compiler knows where it stores that information.
"niches" use otherwise-unassigned code points in defined type representations. Padding bytes, by definition, have undefined content, so cannot be used for "niches" or anything else.
It might be an interesting repr (or something) to say that the padding must be zero -- that gives more niches for optimization, and on small types might not be much of a performance difference since it'd just be writing the whole object at once anyway...
What are the downsides/problems if Rust just says padding bytes have unspecified values but are safe to read as u8 (after an unsafe transmute of course)?