That is a strange argument to my ears. Why isn't the conclusion "... is currently not part of Rust's memory model, so we should look into adding it"? It seems you are assuming that it can't be done / is undesirable without explaining why.
If you'd read my full proposal, you'd see that I'm well aware of that - I even talk about exactly the same LLVM instruction you mention, applied to the same LLVM value.
I could see a strong argument for relaxing a little, and saying that the `freeze` equivalent happens at the conversion from `MaybeUninit<T>` to `T`, instead of at the creation of the `MaybeUninit<T>`, for types that are `AllBitPatternsValid` - but there's a reason I described this as a big change.
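To make that concrete, here is a minimal sketch of where the freeze-equivalent would sit under that relaxed model. `AllBitPatternsValid` is the hypothetical marker trait from the post, not a real std trait, and the demo uses initialized data because actually reading uninitialized memory remains UB under today's rules:

```rust
use std::mem::MaybeUninit;

// Hypothetical marker trait from the post above: every bit pattern is a
// valid value of the type. Not a real std trait.
unsafe trait AllBitPatternsValid: Copy {}
unsafe impl AllBitPatternsValid for u8 {}

// Under the relaxed model, the freeze-equivalent would happen here, at the
// MaybeUninit<T> -> T conversion, rather than when the MaybeUninit is
// created. Today this is only sound when `v` really was initialized, which
// is why the function must stay `unsafe` for now.
unsafe fn to_value<T: AllBitPatternsValid>(v: MaybeUninit<T>) -> T {
    unsafe { v.assume_init() }
}

fn main() {
    let v = MaybeUninit::new(42u8);
    // Sound: the value was initialized before conversion.
    assert_eq!(unsafe { to_value(v) }, 42);
}
```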
Part of porting `socket2` over to it would be to get it stabilized first. Given that it exists, solves the problem, and doesn't require quite major changes to the semantics of `MaybeUninit`, this is a better solution than trying to get major changes into `MaybeUninit`, even though those changes are technically possible.

And the "several years to come, given the usual pace of things" also applies to trying to stabilize changes to `MaybeUninit` - given that you're going to spend years either way, you'd be better off with `BorrowedBuf`, which also resolves the issue of accidentally trying to read bytes from the buffer that weren't written by the receive call (since it tracks that, too), whereas a special case in `MaybeUninit` would leave garbage at the tail of the buffer.
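The "tracks that, too" point can be illustrated with a stable-Rust miniature of the idea behind `BorrowedBuf`. `FilledTracker` and its methods are made up for this sketch; the real nightly API differs:

```rust
use std::mem::MaybeUninit;

// A made-up miniature of what BorrowedBuf tracks: a buffer of possibly
// uninitialized bytes plus the length of the prefix known to be filled.
struct FilledTracker<'a> {
    buf: &'a mut [MaybeUninit<u8>],
    filled: usize,
}

impl<'a> FilledTracker<'a> {
    fn new(buf: &'a mut [MaybeUninit<u8>]) -> Self {
        Self { buf, filled: 0 }
    }

    // Append bytes; only this path can grow the filled region.
    fn push(&mut self, data: &[u8]) {
        for &b in data {
            self.buf[self.filled].write(b);
            self.filled += 1;
        }
    }

    // Only the filled prefix is ever handed out, so the uninitialized
    // tail can never be read by accident.
    fn filled(&self) -> &[u8] {
        // SAFETY: bytes 0..self.filled were written via `push`.
        unsafe { std::slice::from_raw_parts(self.buf.as_ptr().cast::<u8>(), self.filled) }
    }
}

fn main() {
    let mut storage = [MaybeUninit::<u8>::uninit(); 8];
    let mut tracker = FilledTracker::new(&mut storage);
    tracker.push(b"abc");
    assert_eq!(tracker.filled(), b"abc"); // the garbage tail stays unreachable
}
```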
Disclaimer: I'm a member of T-opsem, but I'm not representing the team nor speaking with any meaningful authority - just commenting as a reasonably informed observer with imperfect memory.
There is ongoing work here looking into it, including an open RFC. AIUI, there are two main reasons `freeze` isn't experimentally available yet, roughly:

- It's unclear what the primitive should be. The in-place signature of `fn(&mut MU<T>)` that people often want is misleadingly wrong[1], and `fn(MU<T>) -> T` is still unsafe because it can produce invalid `T` (and will if it can), i.e. immediate UB - not to mention the copy-elision optimization limitations with a type often used precisely to reduce reliance on such optimizations by manually doing in-place initialization. So there's some mind to wait on project safe transmute to be able to provide a safer `freeze` operation.
- There's value in not having a `freeze` operation and never being able to read the "value" of uninitialized bytes, in that that value is whatever digital secret would be the most damaging, and sanitizers prefer being able to diagnose all reads of uninit as UB and don't have good support for `freeze`. (Yet?)
- Bonus: exotic targets that have an actual trap representation for uninitialized registers/memory (e.g. Itanium's Not-a-Thing) can't provide a fully correct `freeze` operation at all. (You could actually argue that memsan is such an "exotic" target.)
- Bonus: IIRC, we still support building against system LLVM versions older than the one that introduced `freeze`. Using non-bundled LLVM without the custom Rust patches will always have some minor issues, but complete non-availability of a feature is a bit more impactful.
I suspect we will expose a `freeze` operation eventually, since the benefits for code people already want to write[2] most likely outweigh the costs; at a minimum, it's possible to specify and implement inline `asm!` that acts as an overly conservative `freeze`, at least on the tier-1 targets. But it isn't likely to be anytime soon, and it's meaningfully lower impact than other items on the lang team's plate.
[1]: People want `freeze` to be a logical no-op that "just" allows them to read the bytes that are there. But at the AM level it needs to write back any bytes that were uninitialized, and at the concrete machine it needs to write back at least one byte per page to handle the cases we know about (and even that might not be enough in all cases of overly clever MMUs). ↩︎

[2]: One that's easy to forget is that full "fast-math" style floating-point optimization essentially requires working on a domain of `MaybeUninit`, since any non-finite values become undefined, and actually proving that the relaxed operations don't introduce any is effectively untenable. So you need a `freeze` to return to the standard floating-point domain. ↩︎
I like your suggestions. Is there a place for average users to vote / elect a nightly feature to be stabilized quickly?
Or, how does the Rust std libs team get input from average users regarding the priorities of pending features? Is that data available somewhere publicly (maybe I missed it)?
Yeah, sorry - that trait was a bit of a thought-stopper for me, since it can't solve the issue by itself. It's convenience on top of the more fundamental and more powerful `freeze`, which we need first, and which is way beyond an API addition or finding the right type constructions.
Features are not stabilized on a basis of desire/popularity, but rather when they're done, which means:
- the implementation is not buggy, and
- the API does not contain choices that are likely to be regretted.
Here is documentation on the stabilization process: Stabilizing Features - Rust Compiler Development Guide
Sometimes all that is needed to get a feature stabilized is to make a PR that stabilizes it. Such a PR should include, or be preceded by, a stabilization report, as documented in the guide, which is essentially an argument that the feature is in good enough shape to be stabilized.
If it isn't ready, then the best thing you can do is contribute fixes to make it ready, if there are things to be fixed. If there aren't things to be fixed but there is uncertainty about whether it's a good design or good implementation, then you can help out by writing code that actually uses the feature even though it's unstable — this is an important part of finding bugs and design flaws.
That one is a hard one to swallow as a user of the language. No one wants a future rug pull. And so it creates a chicken and egg problem.
Possibly the best option would be to use the feature internally in Rustc or std itself. Then at least it is the same people doing the rug pulling as being rug pulled.
Semi-tangential, but I have thought for a long time that the signature of functions that take an uninitialized buffer and initialize it ought to be, idiomatically, something like:

```rust
fn read_into_buffer<'a, T>(buf: &'a mut [MaybeUninit<T>], ...) -> &'a [T]
```

That is, it takes the uninitialized buffer and gives you back a subslice of that buffer that is now guaranteed to be initialized. (Lifetimes written out explicitly for expository clarity.) This would eliminate an entire class of easy mistakes.
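A self-contained stable-Rust sketch of that shape, using a toy in-memory source in place of a real reader (`read_into_buffer` here is illustrative, not a proposed std API):

```rust
use std::mem::MaybeUninit;

// Illustrative only: fill `buf` from `src` and hand back the initialized
// prefix, so the caller can never index past what was actually written.
fn read_into_buffer<'a>(src: &[u8], buf: &'a mut [MaybeUninit<u8>]) -> &'a [u8] {
    let n = src.len().min(buf.len());
    for (slot, &b) in buf.iter_mut().zip(&src[..n]) {
        slot.write(b);
    }
    // SAFETY: the first `n` elements were written in the loop above.
    unsafe { std::slice::from_raw_parts(buf.as_ptr().cast::<u8>(), n) }
}

fn main() {
    let mut buf = [MaybeUninit::<u8>::uninit(); 8];
    let filled = read_into_buffer(b"hello", &mut buf);
    assert_eq!(filled, b"hello"); // only the written prefix is reachable
}
```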
Rust isn't a popularity contest nor a democracy, so the strict answer is no, by design. But that's a bit harsher than merited or true. If there's a "pet" feature you'd like to see move toward stabilization, check the tracking issue for any open linked issues or PRs, then ask on Zulip whether there's anything you can help do to unblock progress. Ideally someone familiar with the workflow will see it and respond, pointing you in the right direction. If not, then provided the feature's been around for a meaningful while, doesn't rely on other unstable features, and there don't seem to be any blocking issues, see Stabilizing a library feature in the rustc dev guide.
With `BorrowedBuf`, IIRC it's not fully clear that it's the ideal API quite yet, nor what the plan is for migrating `Read` implementors to providing `read_buf` in lieu of `read`. The added indirection isn't fully negligible, especially for a trait often used as a `dyn` object, and the need to explicitly `.reborrow()` the cursor isn't pretty either. My personal pet peeve is the inability to transcode between a buffered reader and a buffered writer without using yet another transient buffer between them.
Also, keep in mind that the bandwidth for things across different teams varies wildly and the bandwidth that does exist isn't portable between subdomains.
Yes. It's a hard thing to do, especially when the unstable feature would appear in a library API. But I don't mean that you must do it; I am only saying it is one of the best ways to help that doesn't involve contributing to Rust directly. "People are using this without trouble" looks good in the stabilization report.
It's worth noting that `Read::read_buf` originally looked more like that (or well, closer to `fn(&mut (&[T], &mut [MaybeUninit<T>]))`), but:

- your signature can't handle initializing one larger slice with multiple smaller read calls that don't fill the slice, suffering from a want for borrow downgrades and not ensuring the filled slice is a prefix; and
- neither signature actually guarantees that the "output" slice is the same as / resliced from the input slice, making trusting it in order to then access the buffer you provided to be filled a quite subtle footgun loaded up with potential unsoundness (only defusable with a check that panics on such incorrect usage).

The latter reason is why `BorrowedCursor` exists as the view into `BorrowedBuf` in the current API shape. (I couldn't quickly find the PR/discussion around changing the API, unfortunately, but this is something that Tokio had to deal with for their `bytes::BufMut`.)
Note that `BorrowedBuf` replaced `ReadBuf`, so this is basically the plan stated when closing your `socket2` issue.
The most useful thing is to give experience reports.
Make a branch of your project that can use nightly, enable the feature, and try it out. Does it do what you want? Was it confusing to use? Were there any sharp corners that surprised you? What was the actual macroscopic performance change for your overall project from using it? Did it integrate well with other things? Was there anything that was harder to do with it than you wanted? Etc.
Does that just mean we actually want:

```rust
fn read_into_buffer<'a, T>(
    buf: &'a mut [MaybeUninit<T>],
) -> (&'a mut [T], &'a mut [MaybeUninit<T>]) {
    ...
}
```

This still doesn't guarantee the `(prefix, tail)` nature of the returned slices, but that can maybe be left to documentation guarantees.
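One possible body for that signature, again with a toy byte source so it compiles on its own (illustrative, not a proposed API). Note that the `(prefix, tail)` relationship holds here by construction, which is exactly what the signature alone cannot promise:

```rust
use std::mem::MaybeUninit;

// Split the buffer into the prefix we initialized and the untouched tail.
fn read_into_buffer<'a>(
    src: &[u8],
    buf: &'a mut [MaybeUninit<u8>],
) -> (&'a mut [u8], &'a mut [MaybeUninit<u8>]) {
    let n = src.len().min(buf.len());
    let (head, tail) = buf.split_at_mut(n);
    for (slot, &b) in head.iter_mut().zip(src) {
        slot.write(b);
    }
    // SAFETY: every element of `head` was written in the loop above.
    let head = unsafe { &mut *(head as *mut [MaybeUninit<u8>] as *mut [u8]) };
    (head, tail)
}

fn main() {
    let mut buf = [MaybeUninit::<u8>::uninit(); 6];
    let (filled, rest) = read_into_buffer(b"abcd", &mut buf);
    assert_eq!(filled, b"abcd");
    assert_eq!(rest.len(), 2); // the uninitialized tail, still tracked as such
}
```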
I came to know about `BorrowedBuf` only via this thread (very glad to know of it). It seems that the original tracking RFC has not updated its main description to mention that `BorrowedBuf` replaced `ReadBuf`; the reference only shows up down the page, near the recent comments. It would be helpful (for a non-expert like myself) to clearly list `BorrowedBuf` as the current implementation of the RFC in its main description.
Tracking Issue for RFC 2930 (read-buf) · Issue #78485 · rust-lang/rust · GitHub - I've asked on the issue tracker for you.
Note for future reference that Rust Project issues tend to be very accepting of people asking for improvements to the issue title + summary, as long as you're clear on motivation and polite in your phrasing. Don't be scared to comment on an issue where you think you have useful input.
I see how that's an issue and how it motivates a more cursor-like API.
I don't see why this matters, though. The caller shouldn't need to know that! They should "just" use the output slice until they are ready to do another read, and then drop it and refill the original buffer. What am I missing?
In your proposal, how do I reimplement the following to work as intended, without copying from the output slices?

```rust
let mut f1 = std::fs::File::open(file1)?;
let mut f2 = std::fs::File::open(file2)?;
let f2_read_end = f1_read_count + f2_read_count;
let mut buf = vec![0; f2_read_end];
f1.read_exact(&mut buf[0..f1_read_count])?;
f2.read_exact(&mut buf[f1_read_count..f2_read_end])?;
buf
```

This reads from two sources, placing the read data back-to-back in a buffer for later processing - and it should be possible to implement it such that I don't need to copy data at all.
That's @CAD97's first case, initializing a larger buffer with multiple smaller reads. I'm talking about when you don't need to do that.
Unsound `BufWriter` copy_to specialization with the unstable `read_buf` feature · Issue #93305 · rust-lang/rust · GitHub has sample code for that problem, involving a copy between a reader and a buffered writer.
The core of the issue is that you pass the reader a direct reference to the writer's internal buffer, and expect to be able to say "the first N bytes of the writer's internal buffer are now filled". But because the reader can't be relied upon to have used your input buffer (instead, it could - for example - use a shared buffer), this is unsound, since the writer's internal buffer may not have been initialized at all.
So, something like:

```rust
struct BufferedDataHandler {
    buffer: [MaybeUninit<u8>; BUF_SIZE],
    empty_offset: usize,
    …
}

pub fn copy<R: Read>(mut from: R, to: &mut BufferedDataHandler) -> Result<()> {
    loop {
        let read_count = from.read_into_buffer(&mut to.buffer[to.empty_offset..]).len();
        to.empty_offset += read_count;
        to.process_entire_buffer()?;
        assert_eq!(to.empty_offset, 0); // to show the post-condition of `process_entire_buffer`
    }
}
```
The intention here is to read from the reader directly into the empty space in the output buffer, and then reuse the existing code for `BufferedDataHandler`. But because there is no guarantee that the reader used the supplied buffer, I can't do that without first making sure that the output of `read_into_buffer` is the same as the passed buffer, or else using a bounce buffer.
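The "make sure the output is the same as the passed buffer" check can be sketched like this, where `out` stands for the slice a `read_into_buffer`-style function returned (the function name and shape are illustrative):

```rust
use std::mem::MaybeUninit;

// Defuse the footgun: before trusting the returned slice's length to index
// our own storage, verify it really is a prefix of the buffer we passed in.
fn assert_is_prefix_of(out: &[u8], buf: &[MaybeUninit<u8>]) {
    assert!(
        std::ptr::eq(out.as_ptr(), buf.as_ptr().cast::<u8>()) && out.len() <= buf.len(),
        "reader did not fill the buffer we provided"
    );
}

fn main() {
    let buf = [MaybeUninit::new(0u8); 4];
    // A well-behaved reader hands back a prefix of `buf`:
    let out = unsafe { std::slice::from_raw_parts(buf.as_ptr().cast::<u8>(), 2) };
    assert_is_prefix_of(out, &buf);
    // A misbehaving reader returning its own storage would trip the assert,
    // which is the "check that panics on such incorrect usage" from above.
}
```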