Uninitialized memory

There's been a lot of discussion recently about uninitialized memory in Rust, as it relates to the IO changes and more generally. Quoting from a recent RFC:

Exactly what is guaranteed by safe Rust code is not entirely clear. There are some clear baseline guarantees: data-race freedom, memory safety, type safety. But what about cases like reading from an uninitialized, but allocated slice of scalars? These cases can be made memory and typesafe, but they carry security risks.

In particular, it may be possible to exploit a bug in safe Rust code that causes that code to reveal the contents of memory.

Consider the std::io::Read trait:

pub trait Read {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize>;

    fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<()> { ... }
}

The read_to_end convenience function will extend the given vector's capacity, then pass the resulting (allocated but uninitialized) memory to the underlying read method.

While the read method may be implemented in pure safe code, it is nonetheless given read access to uninitialized memory. The implementation of read_to_end guarantees that no UB will arise as a result. But nevertheless, an incorrect implementation of read -- for example, one that returned an incorrect number of bytes read -- could result in that memory being exposed (and then potentially sent over the wire).
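To make the hazard concrete, here is a minimal sketch (the `BuggyReader` type is hypothetical) of a `read` implementation written entirely in safe code that lies about how many bytes it produced. A `read_to_end`-style caller that trusts the returned count would then treat the untouched portion of its buffer as valid data; if that buffer were uninitialized rather than zeroed, whatever the allocator left behind would be exposed:

```rust
use std::io::{self, Read};

// A buggy but entirely safe reader: it writes nothing into `buf`,
// yet claims to have filled it completely.
struct BuggyReader;

impl Read for BuggyReader {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // Bug: no bytes were written, but we report the buffer as full.
        Ok(buf.len())
    }
}

fn main() -> io::Result<()> {
    let mut reader = BuggyReader;
    // Here the buffer is explicitly zeroed, so nothing leaks. If it were
    // uninitialized capacity handed over by read_to_end, these 16 "read"
    // bytes would be whatever happened to be in memory.
    let mut buf = [0u8; 16];
    let n = reader.read(&mut buf)?;
    assert_eq!(n, 16); // the incorrect count that read_to_end would believe
    Ok(())
}
```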

That RFC, which is now closed, proposed a possibly too-radical change to address these concerns. I wanted to raise this issue in a less formal setting so that the various stakeholders can discuss it more broadly, and perhaps collectively produce a good solution.

As an important data point, zeroing in functions like read_to_end can have a 20-25% overhead in somewhat realistic benchmarks.

Ideally, we would resolve the question about these IO functions by setting a policy at least at the level of std, if not for the language/library ecosystem as a whole. Some questions:

  • Is it possible to set a policy that people will actually follow (and will not cause a mass use of unsafe functions to work around perf problems)?

  • Is it even possible to set out a policy that can rule out a safe function like read_to_end while still allowing an unsafe variant? (And not inadvertently ruling out other things?)

  • More broadly, this is all addressing just one narrow security problem. In general, what ambitions should we have to provide help here that goes beyond undefined behavior in LLVM? What are the tradeoffs, and how should we think about them?


cc @kmcallister @sivadeilra @pcwalton @mahkoh @nikomatsakis @alexcrichton @huon

This benchmark zeroes the whole buffer, but is there a real use case for reading into a big raw uninitialized buffer like this?

I think the correct solution to this is to pass in a buffer object that can keep track of its length.


UPDATE: I have indeed misunderstood the original issue that spawned this discussion. This comment is out of date but I am leaving it.

I am just going to paste my RFC reply here:

I have a few points to make:

First, I am somewhat on the fence about this RFC, but leaning towards a :+1:. To be explicit, I think that I am OK with:

Set an explicit policy that uninitialized memory can never be exposed in safe Rust

Maintaining this invariant is possible in std::io without zeroing out any memory. The implementation of Read::read_to_end() never allows access to uninitialized memory.

However, this RFC doesn’t actually address rust-lang/rust#20314. The original issue says that Read::read_to_end() needs to be marked unsafe because implementations could be buggy. It’s basically saying that any code that, given a bug in the implementation, could possibly expose uninitialized memory to the user needs to be marked as unsafe.

So… If I misunderstood the original issue, please forgive me, but this request seems very unreasonable to me. Virtually every single type in the rust ecosystem can expose uninitialized memory to the user given a bug in the implementation. And yes, this means that bugs in the implementation could be exploited by an attacker to access secrets in memory like passwords etc… but this is true for all code everywhere.

So, to recap, this RFC does not require any zeroing out of memory in std::io. The original issue this RFC links to is not reasonable (as I understand it).

Ok, so my previous comment did, in fact, misunderstand the original issue. So, apologies for that (I will leave the comment for reference though). The argument is that Read::read_to_string() passes uninitialized memory to implementations of Read::read(&mut buf). So, the end user can make a custom implementation of Read that receives uninitialized memory despite containing no unsafe code.

I am still leaning towards a +1 for this RFC. I think Read::read_to_end should zero out memory before passing it to read, incurring the performance hit… but maybe that’s also because I have been against adding Read::read_to_end() to the Read trait in the first place…

I do agree with @tbu. The “safe” abstraction would be for the Read trait to take a buffer. This way one can prevent uninitialized memory from leaking to the consumer 99% of the time.

I have implemented buffer traits here: https://github.com/carllerche/bytes. The one difference that I would make if this RFC passed would be to make the following fn unsafe: https://github.com/carllerche/bytes/blob/master/src/lib.rs#L126

Note that this fn is rarely called by end implementors, but is only used “at the edges” by lib authors to hook up buffers w/ their IO types (like sockets).

Also pasting RFC comment:

The io::Read trait attempts to use an out-pointer for efficiency and to avoid double buffering - but this pattern is almost explicitly not supported in current Rust. The crux of the issue seems to be how you deal with incorrect implementations which read this out pointer instead of writing to it.

The flaw in this design is obvious if you try to generalize Reader and Writer to general types instead of just u8. It’s obviously incorrect to pass an uninitialized &mut [T] to a function, but we sort of paper over this issue for the special case of T = u8, leading to this problem.

I think this special casing is actually incorrect, and just like with the placement new issues, we actually need more advanced language features to properly deal with this, particularly some form of &uninit or &out pointer. Without it, I don’t see how this pattern can be made safe and generalized without implying the same O(n) overhead as with double-buffering.

An alternative fix/workaround for this specific case is for read to take a WriteOnly<[u8]> or similar wrapper which emulates &out, which allows only writing to the underlying type, with reading only allowed through an unsafe API or just forbidden outright.

EDIT: If we re-specialize WriteOnly for bytes, then I think it basically becomes the MutBuf trait from bytes (+1 to that idea, then).
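A minimal sketch of what such a wrapper could look like (names are hypothetical, and a real version would need `MaybeUninit` machinery to be backed by truly uninitialized storage): safe code can only write bytes and query how many have been written, never read them back, so the backing buffer's contents are never observable:

```rust
// Write-only view over a byte buffer. Safe code can append and track
// progress, but there is no safe way to read the underlying bytes.
pub struct WriteOnly<'a> {
    buf: &'a mut [u8],
    pos: usize,
}

impl<'a> WriteOnly<'a> {
    pub fn new(buf: &'a mut [u8]) -> WriteOnly<'a> {
        WriteOnly { buf, pos: 0 }
    }

    /// Copy as much of `src` as fits, returning how many bytes were written.
    pub fn write(&mut self, src: &[u8]) -> usize {
        let n = src.len().min(self.buf.len() - self.pos);
        self.buf[self.pos..self.pos + n].copy_from_slice(&src[..n]);
        self.pos += n;
        n
    }

    /// Total bytes written so far; the caller trusts this, not the callee.
    pub fn written(&self) -> usize {
        self.pos
    }
}

fn main() {
    let mut storage = [0u8; 8];
    let mut w = WriteOnly::new(&mut storage);
    assert_eq!(w.write(b"abc"), 3);
    assert_eq!(w.written(), 3);
}
```

Because the position only advances through `write`, the caller can use `written()` to know exactly how many bytes are initialized, regardless of what the callee claims.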

I’ve found the Buf/MutBuf abstraction, with advance, is actually nicer to work with than slices directly in many cases.


I would suggest that this discussion should lead to an RFC regardless of the outcome, if only to clarify what’s going on. Right now it seems to be the case that:

  1. It is not undefined behavior to hand safe code references to undefined bytes.
  2. However, it is undefined behavior for that safe code to read such bytes. We have no way, in a function signature, to distinguish arguments that will be written to from those that will be both read from and written to, nor to enforce such a distinction inside a safe function itself. Therefore we are in the awkward position where you can start with an initially correct program, then change only an implementation detail of a completely safe function, without changing the interface, without adding an unsafe block (directly or through a call to a function with an unsafe block), and without changing the outputs, and still cause technically undefined behavior.
  3. It is undefined behavior to hand to safe code references to shared bytes, if those bytes may be defined or mutated by another thread at any moment, since this is a data race.
  4. It is undefined behavior to hand to safe code, or in any other way use, primitive types that may have invalid values (e.g. references that may be dangling), even in private fields. This by extension rules out passing uninitialized data of these types. Presumably, this rule is intended to apply to &[T], and to guarantee that not only the reference but the length part of the fat pointer is valid.
  5. It is not explicitly disallowed to hand types with dangerously broken invariants to safe code. E.g. passing a Vec with a correct pointer, but a wrong length and capacity. However, it is UB to actually use such objects in a way that causes problems, which should ultimately only happen in unsafe code. E.g. indexing into a vector implicitly calls a function that does pointer arithmetic in an unsafe block.

With that out of the way, my thought on the performance problem at hand.

If the desire is to address the problem for any arbitrary read implementation, then we have to worry about things such as a read implementation calling broken FFI functions, which can report a wrong number of bytes written to the buffer you hand them. This is really insoluble unless you zero out the buffer. But I think that this is likely to be a waste of time, since one should really just accept that any broken unsafe code, including FFI calls, can allow any number of bad things to happen.

However, if you only want to guarantee that an entirely safe function does not read uninitialized data, then I think @carllerche and @reem are on the right track; we just need some way of representing a reference to write-only memory, without exposing that memory to safe code. In that case, something like the original RFC can be passed without a significant, unavoidable performance hit that drives people to use unsafe.

The only other question I can see is whether we can also guarantee that the safe code writes the number of bytes that it says it does, which is a little bit harder, and perhaps not strictly necessary, but also desirable as a safety guarantee. If I understand correctly, I think that it could be done with a variant on MutBuf where advance is also unsafe (preventing the “skipping” of any bytes in completely safe code), and assuming that the user (e.g. read_to_end) determines the number of bytes by using the remaining function, rather than assuming that some independent byte count (e.g. the return value of read) is correct. (It might be best to provide a consumed query function for MutBuf that exposes the internal pos in this case, to avoid unnecessary cruft code to subtract remaining from the total capacity.)
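A sketch of that variant (trait and type names hypothetical; this stands in for, not reproduces, the `bytes` crate's API): the only safe way to move the cursor is to actually write bytes, `advance` is unsafe for skipping over bytes filled by FFI, and the caller derives the byte count from `consumed` rather than trusting a returned value:

```rust
// MutBuf-style trait where safe code cannot "skip" bytes: the cursor
// advances only by writing, or through an unsafe escape hatch.
pub trait MutBuf {
    /// Capacity left to write into.
    fn remaining(&self) -> usize;
    /// Bytes written so far; callers use this instead of a returned count.
    fn consumed(&self) -> usize;
    /// Append bytes through the safe interface, advancing the cursor.
    fn put(&mut self, src: &[u8]) -> usize;
    /// Advance without writing. Caller must guarantee the skipped bytes
    /// were initialized out-of-band (e.g. by a syscall or FFI call).
    unsafe fn advance(&mut self, cnt: usize);
}

// Minimal in-memory implementation over zero-initialized storage.
pub struct VecBuf {
    data: Vec<u8>,
    pos: usize,
}

impl VecBuf {
    pub fn with_capacity(cap: usize) -> VecBuf {
        VecBuf { data: vec![0; cap], pos: 0 }
    }
}

impl MutBuf for VecBuf {
    fn remaining(&self) -> usize {
        self.data.len() - self.pos
    }
    fn consumed(&self) -> usize {
        self.pos
    }
    fn put(&mut self, src: &[u8]) -> usize {
        let n = src.len().min(self.remaining());
        self.data[self.pos..self.pos + n].copy_from_slice(&src[..n]);
        self.pos += n;
        n
    }
    unsafe fn advance(&mut self, cnt: usize) {
        self.pos = (self.pos + cnt).min(self.data.len());
    }
}

fn main() {
    let mut buf = VecBuf::with_capacity(8);
    buf.put(b"hello");
    assert_eq!(buf.consumed(), 5);
    assert_eq!(buf.remaining(), 3);
}
```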

In this case, a completely safe read_to_mut_buf method (e.g. doing memory-to-memory copies of bytes) can use the safe interface. A read_to_mut_buf method that has an unsafe block (e.g. doing I/O through an OS or other FFI call) can use the unsafe functions, which will be faster anyway in some specific cases.

This does involve a change to the Read trait, which is unfortunate. That churn could be ameliorated for users (but not so much for implementers) of Read by keeping the current read with the byte slice argument as-is, except that it would now be considered UB to hand it an uninitialized slice. It would have a default implementation that uses the new method (read_to_mut_buf or whatever it will be called).

Similarly pasting from the RFC comment.

@reem: Well, a “safe” option with a more forgiving contract might be to pass in an &mut Vec.

In order to be efficient, the callee trusts that there’s at least some capacity reserved, and the caller trusts that read_to_end() will restrict itself to Vec::capacity().

But if the contract is violated, the cost is “Oh no, we had an annoyingly short outbuffer” or “Oh no, we allocated” rather than “Oh no, we read uninitialized memory.”

Of course, that doesn’t permit windowing a larger buffer for multiple reads like &mut [u8], so it’s not a full solution (because you can’t prevent a malicious Read impl from clobbering earlier data in the buffer, and can’t “shrink” the buffer from the left either) - but I think that Vec::with_capacity() is probably a useful model for “How to do this right”

Heck, if someone makes a VecSuffix<'a, T> type, that’d possibly work out pretty darn well.
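A sketch of the more forgiving contract described above (`read_into_vec` is a hypothetical helper, not a proposed std API): the helper only grows the vector through safe, initializing appends, so even a misbehaving `read` implementation can at worst clobber or truncate data, never expose uninitialized bytes. Note this version reads through an initialized scratch buffer, so it pays a copy rather than the zeroing cost:

```rust
use std::io::{self, Read};

// Append everything `reader` produces onto `out`, moving only bytes that
// were demonstrably written. Uninitialized memory is never handed to `read`.
fn read_into_vec<R: Read>(
    reader: &mut R,
    out: &mut Vec<u8>,
    chunk: usize,
) -> io::Result<usize> {
    let start = out.len();
    let mut scratch = vec![0u8; chunk]; // initialized scratch space
    loop {
        let n = reader.read(&mut scratch)?;
        if n == 0 {
            break;
        }
        // Only the first n (initialized) bytes ever reach the output.
        out.extend_from_slice(&scratch[..n]);
    }
    Ok(out.len() - start)
}

fn main() -> io::Result<()> {
    let mut src: &[u8] = b"hello world"; // &[u8] implements Read
    let mut out = Vec::with_capacity(16);
    let n = read_into_vec(&mut src, &mut out, 4)?;
    assert_eq!(n, 11);
    assert_eq!(out, b"hello world");
    Ok(())
}
```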

So, I pondered this issue a bunch more over the past few hours and would like to share.

First, I now completely agree w/ the original issue filed. It is a serious issue that one could write rust code w/o any unsafe blocks and be able to access uninitialized memory. In fact, I do believe that it is a violation of the "memory safety" promise that Rust makes.

I have been trying to come up with a good way to define an invariant to prevent this. How about:

A Rust crate is memory safe if for all consumers, accessing uninitialized memory implies that the consumer has at least one unsafe block.

A consumer of a crate is any unit of rust code that uses the API of the crate.

(note that this isn't the only requirement for a crate to be safe)

I want to re-iterate what I state above. Fixing this issue is possible without any performance hit (and does not require zeroing out memory) if there was a buffer abstraction present. The problem is that this would be a fairly major change to make and it is very late in the game.

how do you define a “consumer”?

A consumer of a crate is any rust code that uses the API of the crate. So, a child dependency of the crate, or a rust end product (executable), etc…

For reference: RFC #98: Uninitialized Pointers.

I think there is general consensus around this point that various folks (@reem, @carllerche, @quantheory, etc) have made:

  1. To make this API safe, we need a variant of read which takes an “out-pointer-abstraction” of some kind.

The problem is that modifying read now feels late in the game. Moreover, the best design for an “out-pointer” abstraction remains just a bit unclear. Something like MutBuf may fill the bill, but that feels like a major addition to std that should be given time to bake (Carl’s library as is doesn’t seem right, because the use of a trait implies that read would have to be either generic or take a trait object, neither of which seem acceptable to me). The other proposals (&out) are even more extreme and require extending the core language itself. So I feel like neither of these is “ready to ship” at this point nor likely to be by 1.0.

But there seems to be another way out, as @quantheory suggested. We can approach this in stages. For the moment, we modify read_to_end so that it initializes the memory in some way and calls read. Later, we can introduce a new version of read based on some form of out-pointer (let’s call it read_buf for now) and modify read_to_end to call that instead. This phasing seems to give us a complete and convenient Read trait now, albeit one that is somewhat slower when calling read_to_end than it might be, and the time to work on the design later.

The main downside in staging things this way is that we won’t be able to set up the defaults as we would like. We would prefer for read to default to calling read_buf, but to keep things backwards compatible we’ll have to set it up the opposite way – the default impl of read_buf will somehow prep the buffer (initialize it, whatever) and call read (in the same way that read_to_end used to do). Well, actually, we could have a “bi-directional” fallback, such that you have to impl at least one, but you can impl either one. However we do it, newer code can simply impl read_buf to gain the full efficiency, and older code can just keep on using read without any performance hit.
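A sketch of that bi-directional fallback (all names here – `Read2`, `ZeroedBuf`, `read_buf` – are placeholders, and `ZeroedBuf` stands in for a real out-pointer abstraction by simply zero-initializing its storage). Each method defaults to calling the other, so an implementor must override at least one; if they override neither, the two defaults recurse forever:

```rust
use std::io;

// Stand-in for an out-pointer abstraction: storage is pre-zeroed, and
// `filled` tracks how many bytes have actually been produced.
pub struct ZeroedBuf {
    data: Vec<u8>,
    filled: usize,
}

impl ZeroedBuf {
    pub fn new(cap: usize) -> ZeroedBuf {
        ZeroedBuf { data: vec![0; cap], filled: 0 }
    }
    pub fn filled(&self) -> &[u8] {
        &self.data[..self.filled]
    }
}

pub trait Read2 {
    // Default: allocate an initialized buffer, delegate to read_buf,
    // and copy the produced bytes back out.
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let mut tmp = ZeroedBuf::new(buf.len());
        self.read_buf(&mut tmp)?;
        let n = tmp.filled;
        buf[..n].copy_from_slice(&tmp.data[..n]);
        Ok(n)
    }

    // Default: the storage is already initialized (zeroed), so it is
    // safe to hand it to read, as read_to_end would have done.
    fn read_buf(&mut self, buf: &mut ZeroedBuf) -> io::Result<()> {
        let filled = buf.filled;
        let n = self.read(&mut buf.data[filled..])?;
        buf.filled += n;
        Ok(())
    }
}

// Implementing just one of the two methods breaks the cycle.
struct Repeat(u8);

impl Read2 for Repeat {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        for b in buf.iter_mut() {
            *b = self.0;
        }
        Ok(buf.len())
    }
}

fn main() -> io::Result<()> {
    let mut r = Repeat(7);
    let mut buf = ZeroedBuf::new(4);
    r.read_buf(&mut buf)?;
    assert_eq!(buf.filled(), &[7, 7, 7, 7]);
    Ok(())
}
```

Older code keeps implementing `read` and pays the initialization cost through the `read_buf` default; newer code implements `read_buf` directly and never touches uninitialized memory from safe code.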

As an aside, I’d prefer we not zero the memory in read_to_end, but rather initialize it with some other value (0xDEADBEEF or something), just because it’s less likely to be something people rely on. The docs don’t really have to talk about this at all, but if they say anything they can presumably say that the contents of the buffer are not defined but not “uninitialized” or something like that

First of all, I think adding methods to a trait is backwards-incompatible, so we would have to add the method today.

Then, I think your proposed read_buf method should become read, so today’s read should be phased out and in the future we can re-add it with a proper solution.

I realize that I’m assuming here that we all agree that giving safe code access to uninitialized data is not permitted by the stdlib, at least. I think there is or can be legitimate reasons for unsafe code to manipulate (and even read from) uninitialized buffers, but this must be approached with care (and I, at least, need to educate myself to some extent on what is required to do this without triggering undefined behavior in LLVM). I am not sure whether there is ever a reason to provide access to uninitialized data to safe code – I’m inclined to say that no crate should do it, but I think at minimum the stdlib should not.

As an aside, what I mean by “safe” and “unsafe” code here is not “code in unsafe blocks”. I mean something fuzzier – in practice, the set of unsafe code that maintains a trust boundary tends to be wider than just what is contained in unsafe blocks. It tends to be all code in a module or related to a particular type or abstraction, etc. It’s worth spending some time on nailing this down at some point. But let me write something a bit more specific than what I wrote above:

  • No library should ever expose a value with undefined memory in some context where the library author cannot vet that this value is not being read.

So if you have a public library function and you return a buffer that contains uninitialized bytes, that’s bad. You can’t enumerate all the callers and you can’t check them. Similarly, if you (as is the case here) pass a buffer to a callback, that’s bad. But if you have a private function that does some of those things, and you are careful in your code base, that could be fine.


I agree w/ the “late in the game” problem.

I do want to point out that implementing read_buf in terms of read would have the same problems as today. read_buf would have to initialize memory in order to be “safe”, incurring a penalty hit, so it wouldn’t be a good situation. read_buf would have to be implemented directly.

As you mentioned, one way to do it would be to provide circular default implementations, requiring a trait impl to provide either Read::read or Read::read_buf. This would require the language to better support traits w/ circular default implementations (e.g. a good error message indicating that at least one of the two functions must be implemented).

I have a similar situation with my OpenGL wrapper.

In OpenGL it is possible to map a buffer located in video memory so that its content can be read or written directly by your program.

Exposing the buffer mapping to the public API is very important for performance. Modern and next-generation rendering techniques involve keeping buffers mapped continuously and changing their content between draw commands. And it's not a "small" optimization, some extreme examples show frames per seconds being multiplied by 10 or 20.

Part of this performance gain comes from the fact that you map the buffer in write-only mode, which is what you should do if you want to just write the content of a buffer (just like the read function of the Read trait). The problem is that OpenGL doesn't allow reading from a buffer mapped in write-only:

No GL error is generated if the returned pointer is accessed in a way inconsistent with access (e.g. used to read from a mapping made with access GL_WRITE_ONLY or write to a mapping made with access GL_READ_ONLY), but the result is undefined and system errors (possibly including program termination) may occur.

And in this case writing arbitrary data is not an option.

I'm in favor of a WriteOnly type or similar.


This would not require any such change, but I agree it would be more friendly. Otherwise it just means the code will loop infinitely if you fail to implement one or the other.

It is not backwards incompatible to add a method so long as a default is provided. This is precisely why I was discussing the setup of defaults.