Thanks everyone for your replies!
I am surprised to hear that. Doesn't #[global_allocator] work on these platforms as well? What prevents using the same technique with EII?
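To be clear, the technique I have in mind is the one #[global_allocator] already uses: the hook is declared upstream (in alloc), and any crate in the dependency graph can provide the implementation, wired up at link time. A minimal example of the existing mechanism:

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// A trivial allocator that just forwards to the system allocator.
struct PassThrough;

unsafe impl GlobalAlloc for PassThrough {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

// This attribute wires the implementation into alloc at link time;
// EII is (roughly) a generalization of this mechanism.
#[global_allocator]
static GLOBAL: PassThrough = PassThrough;

fn main() {
    // Every allocation in the program now goes through PassThrough.
    let v = vec![1, 2, 3];
    println!("{v:?}");
}
```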
That is indeed interesting to know. While splitting the definition of io::Error would be complex, splitting its methods could enable something like "manual EII" (where constructing an Error from a raw OS error would require providing the necessary functions yourself).
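To sketch one way I picture this (all names below are made up for illustration, not a concrete proposal): the platform-independent part of io::Error would live in alloc, and whoever constructs an OS error would supply the OS-specific functions at construction time.

```rust
// Hypothetical sketch: the caller provides the OS-specific behaviour.
pub struct OsErrorVtable {
    // Turns a raw OS error code into a human-readable message.
    pub describe: fn(code: i32) -> String,
}

pub struct Error {
    code: i32,
    vtable: &'static OsErrorVtable,
}

impl Error {
    // "Manual EII": instead of std knowing about the OS, constructing
    // an Error from a raw OS error requires passing the functions in.
    pub fn from_raw_os_error(code: i32, vtable: &'static OsErrorVtable) -> Error {
        Error { code, vtable }
    }

    pub fn message(&self) -> String {
        (self.vtable.describe)(self.code)
    }
}

// A platform layer (libc-based, say) would then provide the vtable once:
fn describe_errno(code: i32) -> String {
    format!("os error {code}") // a real impl would call strerror etc.
}

static LIBC_VTABLE: OsErrorVtable = OsErrorVtable { describe: describe_errno };

fn main() {
    let err = Error::from_raw_os_error(2, &LIBC_VTABLE);
    println!("{}", err.message());
}
```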
I don't think so. Adding a supertrait to Read would be a breaking change, and blanket impls from GenericRead<io::Error> to Read would be complex with the existing impl<R> Read for &mut R. I think it would require something like a specialization lattice?
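To make the coherence problem concrete (GenericRead is hypothetical, and Read with its forwarding impl is reproduced in simplified form so the conflict is visible in one file):

```rust
use std::io;

// Hypothetical error-generic trait.
trait GenericRead<E> {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, E>;
}

// Simplified stand-in for std::io::Read.
trait Read {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
}

// std's existing forwarding impl (simplified).
impl<R: Read + ?Sized> Read for &mut R {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        (**self).read(buf)
    }
}

// The blanket impl we would want...
impl<R: GenericRead<io::Error>> Read for R {
    // ...is rejected, because for some &mut T both impls could apply:
    // error[E0119]: conflicting implementations of trait `Read`
    //               for type `&mut _`
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        GenericRead::read(self, buf)
    }
}
```

Resolving this overlap is exactly the kind of thing that would need lattice-style specialization, which does not exist today.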
Also, code using these traits often relies on the fact that it can create a custom io::Error. Just in std, you have instances of this in Read::read_exact, Read::read_to_string, and Write::write_all, plus the OOM handling in Read::read_to_end (and probably more). And it would be nice for these specific errors to remain constants.
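For instance, the default body of Read::read_exact is roughly the following (simplified; the real one in std differs in details), and its error path has to conjure an io::Error out of nothing:

```rust
use std::io::{self, ErrorKind, Read};

// Roughly the default body of Read::read_exact (simplified).
fn read_exact_sketch<R: Read + ?Sized>(r: &mut R, buf: &mut [u8]) -> io::Result<()> {
    let mut pos = 0;
    while pos < buf.len() {
        match r.read(&mut buf[pos..]) {
            Ok(0) => break, // EOF
            Ok(n) => pos += n,
            Err(e) if e.kind() == ErrorKind::Interrupted => {} // retry
            Err(e) => return Err(e),
        }
    }
    if pos == buf.len() {
        Ok(())
    } else {
        // This is the problematic part for a GenericRead<E>: with an
        // opaque E there is no way to construct this error. (std uses
        // an internal non-allocating const error here, which is the
        // property I would like to keep.)
        Err(io::Error::new(
            ErrorKind::UnexpectedEof,
            "failed to fill whole buffer",
        ))
    }
}
```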
Additionally, I fear that creating new traits would split the ecosystem.
Overall, I think that doing the work to migrate the existing traits is easier and better than working around it.
Note that it is the inherent impl block of the type dyn std::error::Error that is split, not the definition of the trait itself. That is why my proposal is only to migrate to alloc for now.
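For reference, this is roughly how that split looks in the std sources today, as I understand them (bodies elided; the attributes are internal and unstable, so this is illustrative only, not something user code can write):

```rust
// --- in core::error: the trait and the allocation-free methods ---
#[rustc_has_incoherent_inherent_impls] // internal attribute
pub trait Error: core::fmt::Debug + core::fmt::Display { /* ... */ }

impl dyn Error + 'static {
    pub fn downcast_ref<T: Error + 'static>(&self) -> Option<&T> { /* ... */ }
}

// --- in alloc, a *different* crate, extending the inherent impl ---
impl dyn Error {
    #[rustc_allow_incoherent_impl] // opts this item into the split
    pub fn downcast<T: Error + 'static>(
        self: Box<Self>,
    ) -> Result<Box<T>, Box<dyn Error>> { /* ... */ }
}
```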