Would it be useful to have a trait Communicate in std::io?
I am currently working on a protocol that is transaction-based.
For transaction-based I/O where it is not feasible to implement Read and Write separately, it might be helpful to have a standard library trait that covers such "atomic" I/O transactions.
We could even blanket implement it for implementors of Read and Write if that makes sense:
use std::io::Read;
use std::io::Write;

pub trait Communicate {
    /// Perform one request/reply transaction as a single operation.
    fn communicate(&mut self, request: &[u8]) -> std::io::Result<impl AsRef<[u8]>>;
}

impl<T> Communicate for T
where
    T: Read + Write,
{
    fn communicate(&mut self, request: &[u8]) -> std::io::Result<impl AsRef<[u8]>> {
        // Send the complete request, then collect the reply.
        self.write_all(request)?;
        self.flush()?;
        let mut buffer = Vec::new();
        // Note: read_to_end only returns once the peer signals EOF.
        self.read_to_end(&mut buffer)?;
        Ok(buffer)
    }
}
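For illustration, here is a minimal sketch of how the blanket impl would behave against a mocked transport. MockDevice is a hypothetical stand-in for a real serial port or socket, not part of the proposal:

use std::io::{self, Read, Write};

// Hypothetical mock transport: records the request and replies with a
// canned response.
struct MockDevice {
    response: io::Cursor<Vec<u8>>,
    sent: Vec<u8>,
}

impl Read for MockDevice {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.response.read(buf)
    }
}

impl Write for MockDevice {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.sent.extend_from_slice(buf);
        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut device = MockDevice {
        response: io::Cursor::new(b"pong".to_vec()),
        sent: Vec::new(),
    };
    // Copy the reply out: the returned opaque type borrows `device`,
    // so we convert it to an owned Vec before inspecting `device` again.
    let reply = device.communicate(b"ping")?.as_ref().to_vec();
    assert_eq!(reply, b"pong");
    assert_eq!(device.sent, b"ping");
    Ok(())
}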
I'm not convinced that Communicate as presented is a useful abstraction. It combines sending a request with receiving a response, which means it's not useful if your protocol has unsolicited "responses" and awkward to implement if your protocol supports tagged queueing; and it works at the level of a block of bytes instead of a protocol item.
When I think about the cases I could use such a thing for, I find that I have at least three axes I could abstract over:
1. Pairing up of requests with their associated replies. This should handle both the simple case of "send a request, the next incoming packet is the reply" and the more complicated case of "send a tagged request, the reply comes in with the appropriate tag", as well as unsolicited data (an incoming packet not associated with any request). A possible trait shape is sketched after this list.
2. The ability to convert between streams or blocks of bytes and higher-level items; this is basically tokio_util::codec, which uses separate Decoder and Encoder traits to convert between items and bytes.
3. Working in packets (possibly atomically, possibly not). A PacketWrite and PacketRead pair of traits that guarantee to maintain block boundaries, or fail, could have their uses, as could an AtomicWrite trait that guarantees that certain sizes of Write or PacketWrite are all-or-nothing (which would need to be tied into OS-level support for things like atomic writes to NVMe storage). A possible trait shape is also sketched below.
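To make axes 1 and 3 concrete, here is one possible shape the traits could take. All names here (Transact, Incoming, PacketWrite, PacketRead) are illustrative, not a proposal for std:

// Axis 1 sketch: pairing requests with replies by tag.
pub trait Transact {
    type Tag;
    type Request;
    type Reply;

    /// Send a request; returns the tag it was sent under.
    fn send(&mut self, request: Self::Request) -> std::io::Result<Self::Tag>;

    /// Receive the next incoming item: either a reply to a previously
    /// tagged request, or unsolicited data.
    fn receive(&mut self) -> std::io::Result<Incoming<Self::Tag, Self::Reply>>;
}

pub enum Incoming<Tag, Reply> {
    /// A reply that pairs up with the request sent under `tag`.
    Reply { tag: Tag, reply: Reply },
    /// An incoming item not associated with any request.
    Unsolicited(Reply),
}

// Axis 3 sketch: packet-oriented traits that preserve block boundaries
// or fail, rather than silently splitting or merging writes.
pub trait PacketWrite {
    /// Write `packet` as a single block, or fail without a partial write.
    fn write_packet(&mut self, packet: &[u8]) -> std::io::Result<()>;
}

pub trait PacketRead {
    /// Read exactly one packet, preserving the sender's block boundary.
    fn read_packet(&mut self) -> std::io::Result<Vec<u8>>;
}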
Which of these axes are you trying to work across?
The protocol itself is third-party; I implemented a transceiver that runs in a separate thread and does the I/O on the serial port, including the handling of so-called callbacks (i.e. unsolicited responses).
The host with the communicate() interface is responsible for handling "atomic" send-and-reply transactions, for which I (think I) need that interface, and communicates with the transceiver via channels.
That protocol document makes me think in terms of option 2, not option 1. You don't have a useful lower-level concept of "packet boundary", and you do actually have the same needs as a tagged protocol, since there's no guarantee that a given DATA frame is a reply to the most recently sent frame (and, indeed, if you're pipelining as per that document, it often won't be).
If I were working with that device, I'd use tokio_util::codec to allow me to have an enum SentFrame and enum ReceivedFrame that reflect the data going back and forth in a structured fashion, and a way to encode and decode that into byte streams for the serial port.
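A hedged sketch of what that could look like; the frame variants and the wire format here are made up for illustration, and the real protocol's framing rules would replace them:

use bytes::{Buf, BufMut, BytesMut};
use tokio_util::codec::{Decoder, Encoder};

// Illustrative frame types; the real protocol's variants go here.
pub enum SentFrame {
    Data(Vec<u8>),
}

pub enum ReceivedFrame {
    Data(Vec<u8>),
    Callback(Vec<u8>),
}

pub struct FrameCodec;

impl Encoder<SentFrame> for FrameCodec {
    type Error = std::io::Error;

    fn encode(&mut self, item: SentFrame, dst: &mut BytesMut) -> Result<(), Self::Error> {
        // Made-up wire format: type byte, big-endian u16 length, payload.
        let SentFrame::Data(payload) = item;
        dst.put_u8(0x00);
        dst.put_u16(payload.len() as u16);
        dst.extend_from_slice(&payload);
        Ok(())
    }
}

impl Decoder for FrameCodec {
    type Item = ReceivedFrame;
    type Error = std::io::Error;

    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
        // Returning Ok(None) tells the framed transport to read more bytes.
        if src.len() < 3 {
            return Ok(None);
        }
        let len = u16::from_be_bytes([src[1], src[2]]) as usize;
        if src.len() < 3 + len {
            return Ok(None);
        }
        let kind = src[0];
        src.advance(3);
        let payload = src.split_to(len).to_vec();
        Ok(Some(match kind {
            0x01 => ReceivedFrame::Callback(payload),
            _ => ReceivedFrame::Data(payload),
        }))
    }
}

Wrapping the serial port in tokio_util::codec::Framed::new(port, FrameCodec) then gives you a Sink of SentFrame and a Stream of ReceivedFrame, so pairing replies with requests happens at the frame level rather than on raw bytes.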
Indeed, it is a challenge to group packets into logical responses. I currently use the protocol's ability to suppress callbacks to avoid them mixing in with responses during a transaction.
Also, a response might be fragmented into multiple response frames, which makes tokio_util::codec look really interesting.
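For the fragmentation part, I imagine a small accumulator on top of the decoded frames would work. A hedged sketch; the last_fragment flag here stands in for however the protocol actually marks the final frame of a response:

// Reassemble a logical response from multiple decoded fragments.
pub struct Fragment {
    pub payload: Vec<u8>,
    pub last_fragment: bool,
}

pub struct ResponseAssembler {
    buffer: Vec<u8>,
}

impl ResponseAssembler {
    pub fn new() -> Self {
        Self { buffer: Vec::new() }
    }

    /// Feed one decoded fragment; returns the complete response once
    /// the last fragment has arrived.
    pub fn push(&mut self, fragment: Fragment) -> Option<Vec<u8>> {
        self.buffer.extend_from_slice(&fragment.payload);
        if fragment.last_fragment {
            Some(std::mem::take(&mut self.buffer))
        } else {
            None
        }
    }
}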