Confusing doc or implementation on write_vectored

Here is the doc:

Like write, except that it writes from a slice of buffers.

This scares me quite a bit. Since write may not write all the data in the buffer, I have to use write_all to do the automatic retry. But my impression is that write_vectored will write all the data in one shot, i.e., atomically, i.e., no need to retry.

So is the doc wrong, or is my understanding of write_vectored wrong?

But my impression is that write_vectored will write all the data in one shot, i.e., atomically, i.e., no need to retry.

On what is that impression founded?

First, examples: all the examples I found just call write_vectored once and call it done. Second, docs: in "Write::write vs. Write::write_vectored" (The Rust Programming Language Forum), the main difference I can find is:

The data transfers performed by readv () and writev () are atomic: the data written by writev () is written as a single block that is not intermingled with output from writes in other processes

```rust
use tokio::io::{self, AsyncWriteExt};
use tokio::fs::File;
use std::io::IoSlice;

#[tokio::main]
async fn main() -> io::Result<()> {
    let mut file = File::create("foo.txt").await?;

    let bufs: &[_] = &[
        IoSlice::new(b"hello"),
        IoSlice::new(b" "),
        IoSlice::new(b"world"),
    ];

    file.write_vectored(&bufs).await?;
    Ok(())
}
```
The code example you cite there is an async write trait. It might have different semantics compared to the synchronous API, so you shouldn't draw conclusions for one from the other.

From the preadv manpage


  On  success, readv(), preadv(), and preadv2() return the number of bytes read; writev(), pwritev(), and pwritev2() return the number of
  bytes written.

  Note that it is not an error for a successful call to transfer fewer bytes than requested (see read(2) and write(2)).

The atomicity only covers the number of bytes that do end up getting read. E.g. if you pass two buffers, 2 bytes each, and the file contains XXYZZ, then the guarantees mean that "XX", "Y" is OK, but "XX", "Z" isn't. In other words, it means the buffers aren't turned into separate reads.

Note that both the Tokio and std versions of write_vectored return the number of bytes actually written in the event of successfully writing anything; your code is discarding this return value. If I change the write_vectored line in your code to:

```rust
println!("{}", file.write_vectored(&bufs).await?);
```

then in the case where everything is written, it'll print 11. But it's entirely acceptable, per the contract for write_vectored, for it to just write "h" to foo.txt and print 1. It could also write "hello " and print 6, or even "hello w" and print 7. What it cannot do is write "h w" and print 3 - it has to write the buffers in order, and fully finish one buffer before moving on to the next - so a return value of Ok(3) means that it wrote "hel" to the file, not "h w".

In my opinion, the API is pretty much useless in its current shape, as the retry logic would be quite complicated. A write_vectored_all is much more needed than write_all, since I can imagine the retry for write_vectored would be much more complicated than for write. Yet we have write_all but not write_vectored_all, which is weird from a library-design point of view.

Note that write_all_vectored exists, but is experimental (because the API for it is hard to design well - see also IoSlice::advance_slices which has the same problem). write_vectored is the lower-level API that directly reflects what the OS provides; it's thus useful in the same way that write is useful even though write_all exists.

Thanks, I thought these APIs were already mature; that's why I am questioning the design. Personally I feel "write" and "write_vectored" should be renamed to "write_internal" and "write_vectored_internal", as it's too easy for people to call write and exit, forgetting to retry. It's a source of bugs. Who the hell would expect that write will not actually write everything you give it?

Just here to remind you that the Linux write API has the same semantics.

The OS API is different. OS authors probably don't worry about providing a write_all; they are facing library authors, not end developers.

This is the API you get for writing in Python, C, C++, Java and other languages - you give it a buffer to write, and you get told how many bytes were written, which is guaranteed to be less than or equal to the number of bytes you asked it to write. They all just wrap the OS API in their language-specific calling convention.
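That universal contract is why every one of those languages ends up with the same retry loop; std's write_all is essentially this (a from-memory sketch, not std's exact source):

```rust
use std::io::{self, Write};

// The loop every language builds on top of the OS write():
// keep calling until the buffer is drained.
fn write_all_by_hand<W: Write>(w: &mut W, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match w.write(buf) {
            Ok(0) => return Err(io::Error::new(io::ErrorKind::WriteZero, "wrote zero bytes")),
            Ok(n) => buf = &buf[n..], // short write: drop the written prefix, try again
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut out = Vec::new();
    write_all_by_hand(&mut out, b"hello world")?;
    assert_eq!(out, b"hello world");
    Ok(())
}
```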

.NET is the exception here, since it always guarantees the write_all behaviour. I'd agree that this is more user-friendly, but getting the API right in the absence of a garbage collector is also very hard, since you want to be able to free memory for buffers that have been written if that buffer will not be reused, but you also want to keep buffers in memory if something's going to reuse it.
