A Mutex poisons itself if a panic happens while its MutexGuard is held. I fully support this for safety. Likewise, fs::File should be safe, particularly for writes.
There should be a .close() that returns a Result, and the usual unused-result lint should ensure it's checked with a try! (or equivalent).
If someone forgets to call .close() explicitly, and the implicit close() run from drop fails, and no panic is already active, it should panic.
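A sketch of what an explicit, checkable close could look like on Unix. `checked_close` is hypothetical (std's File exposes no such method); it calls close(2) directly through the raw fd:

```rust
use std::fs::File;
use std::io;
use std::os::raw::c_int;
use std::os::unix::io::IntoRawFd;

// std has no checked close, so declare close(2) ourselves (Unix only).
extern "C" {
    fn close(fd: c_int) -> c_int;
}

/// Hypothetical explicit close: consumes the File and surfaces
/// close(2)'s return value as a Result the caller must check.
fn checked_close(f: File) -> io::Result<()> {
    // Take ownership of the fd so File's Drop won't close it a second time.
    let fd = f.into_raw_fd();
    if unsafe { close(fd) } == 0 {
        Ok(())
    } else {
        Err(io::Error::last_os_error())
    }
}
```

Because the return type is Result, forgetting the check trips the existing unused-result lint, which is exactly the try!-enforcement asked for above.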
From the close(2) man page:

    Not checking the return value of close() is a common but nevertheless
    serious programming error. It is quite possible that errors on a
    previous write(2) operation are first reported at the final close().
    Not checking the return value when closing the file may lead to
    silent loss of data. This can especially be observed with NFS and
    with disk quota.
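In today's std, the closest defense is to force deferred write errors to surface via `sync_all()` before the implicit close discards them. A sketch (`write_all_checked` is an illustrative name, not an API):

```rust
use std::fs::File;
use std::io::{self, Write};
use std::path::Path;

fn write_all_checked(path: &Path, data: &[u8]) -> io::Result<()> {
    let mut f = File::create(path)?;
    f.write_all(data)?; // may only hand the bytes to the kernel
    f.sync_all()?;      // deferred errors (NFS, quota) surface here, checkably
    Ok(())              // drop still runs close(2), but it should have
                        // nothing important left to report
}
```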
As a small aside, this is where something like chained/suppressed panics could be useful. If there is an active panic and the implicit File close also fails, one could attach to the active panic a message along the lines of "oh crap, this thing failed too", and both could be logged by whatever logs the panic. That's the thing about errors: when it rains, it pours.
Note that fs::File is doing an ad-hoc println as 'logging' (to stdout); attaching a message to the active panic instead seems better.
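A sketch of the proposed drop behavior, assuming a hypothetical `LoggedFile` wrapper. `std::thread::panicking()` is the real hook for "is a panic already active", and `sync_all()` stands in for the close result that std's drop discards:

```rust
use std::fs::File;
use std::thread;

struct LoggedFile {
    inner: Option<File>,
}

impl Drop for LoggedFile {
    fn drop(&mut self) {
        let f = self.inner.take().expect("file already taken");
        // Approximate "did the close fail?"; File's own drop gives us
        // no way to see close(2)'s result.
        if let Err(e) = f.sync_all() {
            if thread::panicking() {
                // A panic is already unwinding: panicking again would
                // abort. This is where a chained/suppressed panic could
                // attach "this failed too"; for now, log to stderr.
                eprintln!("implicit close also failed: {}", e);
            } else {
                panic!("implicit close failed: {}", e);
            }
        }
        // f's own drop now closes the descriptor.
    }
}
```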
Linux’s position is very awkwardly stated in the man page:
    Note that the return value should only be used for
    diagnostics. In particular close() should not be retried after an
    EINTR since this may cause a reused descriptor from another thread to
    be closed.
That actually means “Even on EINTR, the file descriptor is closed”. On Linux at least. HP-UX is NOT the same.
One workaround is to block signals (e.g. SIGINT) temporarily while doing the close(). That hurts performance and Ctrl-C responsiveness, so IMHO it should only be done if you have no idea what your platform does with EINTR.
AIX appears to have the Linux behavior. Not sure about BSD/Mac.
Linus Torvalds, on LKML:

    The fact is, what Linux does and has always done is the only reasonable
    thing to do: the close will tear down the FD, and the error value is
    nothing but a warning to the application that there may still be IO
    pending (or there may have been failed IO) on the file that the (now
    closed) descriptor pointed to.

    The application may want to take evasive action (ie try to write the
    file again, make a backup, or just warn the user), but the file
    descriptor is gone.
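The platform split above can be captured as a small decision function (illustrative only; the `EINTR` constant is Linux's errno value, and `fd_released_on_eintr` encodes the per-platform behavior discussed above):

```rust
#[derive(Debug, PartialEq)]
enum CloseRetry {
    Done,  // fd is gone, whatever the error said
    Retry, // fd is still open; close() again
}

const EINTR: i32 = 4; // errno value on Linux, for illustration

fn after_failed_close(errno: i32, fd_released_on_eintr: bool) -> CloseRetry {
    if errno == EINTR && !fd_released_on_eintr {
        // HP-UX style: the descriptor survived the interrupted close.
        CloseRetry::Retry
    } else {
        // Linux/AIX style: the fd is already torn down; retrying risks
        // closing a descriptor freshly reused by another thread.
        CloseRetry::Done
    }
}
```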
Python doesn't seem to care about HP-UX. There's no retry loop on EINTR; it leaves the fd open, and it will never be closed.
Python has a weird _PyVerify_fd function which, on Windows, stops the CRT from asserting on a bad fd. http://svn.python.org/projects/python/trunk/Include/fileobject.h This is because Python lets the user pick any non-legal fd, which I don't think is a good idea (see next point).
Python doesn't treat EBADF as something seriously wrong. I disagree, and strongly feel that fs::File (and more generally sockets and anything else holding an fd) should maintain the invariant that it is the sole owner of a legal fd. If that contract is violated, serious corruption in the application or kernel has occurred, and abort!() should happen. In fact, getting an EBADF is the lucky case, because it tells you a double free was attempted; you could instead have closed some other thread's legal fd. Not aborting on EBADF is like not aborting when free() detects your pointer is garbage.
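A sketch of that severity policy (the enum and function are hypothetical, and 9 is EBADF's value on Linux): treat EBADF as a fatal ownership violation the caller should abort on, and everything else as a diagnostic:

```rust
use std::io;

#[derive(Debug, PartialEq)]
enum CloseErrorSeverity {
    Fatal,      // caller should process::abort(): fd double free detected
    Diagnostic, // e.g. EIO: warn the user, maybe rewrite the file
}

fn classify_close_error(err: &io::Error) -> CloseErrorSeverity {
    // EBADF means the fd we supposedly owned was already closed: a
    // double free. Getting EBADF at all is the lucky case; the stray
    // close could instead have hit another thread's reused descriptor.
    if err.raw_os_error() == Some(9) {
        CloseErrorSeverity::Fatal
    } else {
        CloseErrorSeverity::Diagnostic
    }
}
```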