PR #124210 made dropping a File whose OS-level file descriptor is invalid cause an immediate process abort in debug builds. In principle, I support this change; however, it has broken a number of test cases for a library I'm working on (not currently public). The problem is that some low-level operations, notably File::metadata() and libc::fcntl(fd, F_GETFL), can only fail because of an invalid fd. Exhaustively exercising the error handling of a crate that uses these operations therefore requires mock Files that wrap invalid fds. It was formerly possible to construct such a File and have it "work" (i.e. all operations on it would fail, as the test code requires), but now any such File crashes the entire test group when dropped.
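For concreteness, here is a minimal sketch of the failure mode (the fd value 999_999 is an arbitrary assumption, chosen because it is almost certainly not an open descriptor):

use std::fs::File;
use std::os::fd::FromRawFd;

fn main() {
    // SAFETY: from_raw_fd's contract is deliberately violated here;
    // this is exactly the situation the new debug assertion catches.
    let f = unsafe { File::from_raw_fd(999_999) };
    drop(f); // with #124210, a debug build aborts here rather than
             // silently ignoring the failing close()
}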
I would like to start a conversation about how to recover the ability to unit-test error handling for code that calls OS primitives that can only fail when supplied an invalid fd. Possibilities that occur to me include:

- a new File constructor, File::unusable() or something like that, that gives you a File on which all I/O is guaranteed to fail and as_raw_fd() is guaranteed to give you an invalid fd, but dropping it is fine
- alter debug_assert_fd_is_open so that if the fd has a particular, documented value (−2, perhaps) that means "I made this File wrap an invalid fd on purpose", it doesn't abort the process (a sketch follows this list); make similar changes throughout the ecosystem (e.g. rustix has several APIs that will abort if passed −2 as an fd)
- have someone write a really thorough crate for failure injection on system calls in general, obviating the need to pull tricks like this (a hand-rolled version of the idea is sketched at the end of this post)
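To make the second option concrete, here is a rough sketch of the shape such a check could take. debug_assert_fd_is_open is internal to std, so this only mirrors its outline, and the −2 sentinel is the hypothetical documented value:

use std::os::fd::RawFd;

const DELIBERATELY_INVALID: RawFd = -2; // hypothetical opt-in sentinel

fn debug_assert_fd_is_open(fd: RawFd) {
    // The caller promised this File wraps an invalid fd on purpose.
    if fd == DELIBERATELY_INVALID {
        return;
    }
    // F_GETFD has no side effects and fails only for an invalid fd.
    if unsafe { libc::fcntl(fd, libc::F_GETFD) } == -1 {
        std::process::abort();
    }
}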
I am open to any and all suggestions ... other than "don't bother testing your error handling".
Example of code broken by #124210 (playground link):
use std::os::fd::AsRawFd;
use std::fs::File;
use std::io;
#[allow(dead_code)]
#[derive(Debug)]
struct WrappedFile {
    f: File,
    // more fields in the real thing
}
fn wrap_read_only_file(f: File) -> io::Result<WrappedFile> {
    // SAFETY: fcntl trusts you to supply a valid fd; manual error
    // checking is required. Various other safety issues are not
    // relevant when the second argument is F_GETFL.
    let fl = unsafe { libc::fcntl(f.as_raw_fd(), libc::F_GETFL) };
    if fl == -1 {
        return Err(io::Error::last_os_error());
    }
    if (fl & libc::O_ACCMODE) != libc::O_RDONLY {
        return Err(io::Error::from_raw_os_error(libc::EINVAL));
    }
    Ok(WrappedFile { f })
}
// originally a unit test of wrap_read_only_file's error handling
pub fn main() {
    use libc::{c_int, rlimit};
    use std::os::fd::FromRawFd;

    // The only way to make fcntl(fd, F_GETFL) fail is to supply an
    // invalid file descriptor. An fd whose numeric value is larger
    // than the process's hard limit on the number of open files is
    // guaranteed to be invalid, no matter what the rest of the
    // process does.
    let f = unsafe {
        let mut rl = rlimit {
            rlim_cur: 0,
            rlim_max: 0,
        };
        if libc::getrlimit(libc::RLIMIT_NOFILE, &mut rl) < 0 {
            panic!(
                "getrlimit failed despite valid args: {}",
                io::Error::last_os_error()
            );
        }
        // rlim_max is presumably either u32 or u64, and c_int is
        // almost certainly i32. Under these assumptions, if the
        // system call returns the maximum value for the type (and
        // thus .saturating_add(1) returns the value unchanged), then
        // the try_into is guaranteed to fail, so one expect() will do
        // for both.
        let fd: c_int = rl
            .rlim_max
            .saturating_add(1)
            .try_into()
            .expect("fake fd ought to be in positive range of c_int");
        File::from_raw_fd(fd)
    };
    let err = wrap_read_only_file(f).unwrap_err();
    assert_eq!(err.raw_os_error(), Some(libc::EBADF));
}
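Finally, for comparison with the third option: short of a general-purpose failure-injection crate, the trick can be avoided today by routing the syscall through a seam that tests can replace. A minimal sketch, where the GetFl trait and both implementations are hypothetical names of my own invention:

use std::io;
use std::os::fd::RawFd;

// Production code calls fcntl through this trait, so a test can
// inject EBADF without constructing a real invalid fd.
trait GetFl {
    fn get_fl(&self, fd: RawFd) -> io::Result<libc::c_int>;
}

struct RealGetFl;
impl GetFl for RealGetFl {
    fn get_fl(&self, fd: RawFd) -> io::Result<libc::c_int> {
        // SAFETY: F_GETFL dereferences nothing; the only failure mode
        // is an invalid fd, reported as -1 with errno EBADF.
        let fl = unsafe { libc::fcntl(fd, libc::F_GETFL) };
        if fl == -1 {
            Err(io::Error::last_os_error())
        } else {
            Ok(fl)
        }
    }
}

// Test double: every call fails the way an invalid fd would.
struct AlwaysEbadf;
impl GetFl for AlwaysEbadf {
    fn get_fl(&self, _fd: RawFd) -> io::Result<libc::c_int> {
        Err(io::Error::from_raw_os_error(libc::EBADF))
    }
}

The obvious downside is that the seam exists in production code purely for the tests' benefit, which is part of why a really thorough injection crate would be nicer.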