Optimizing fs::copy for the identical content edge-case on CoW filesystems


#55909 will add optimizations to fs::copy for sparse files on Linux.

I see further optimization potential for the special case where one installs updates of something and the tool just blindly copies the new files over the old ones, even though many of the old ones may remain unchanged. On a filesystem with snapshots this breaks the reflinks between snapshots.

fs::copy could be optimized to detect this and either skip the copy entirely or do a reverse FIDEDUPERANGE. This would come at the cost of increased complexity: the target file would have to be opened read-write instead of write-only, and truncation deferred until we have checked whether the file is a duplicate. The API provides some leeway here, but retaining the current behavior would require a fallback to write-only.

As an added benefit, the delayed truncation and the identity check would also remove the footgun of deleting data when a file is copied onto itself (at least on Linux).

Is this something the standard library should do?


Interesting thoughts. I think this could be a useful feature, but not for std. This actually sounds like the basis of a packaging framework.

The important thing is that this really requires application-level context; i.e. the standard library has no way of knowing that the file being copied is an update rather than a full replacement. For fs::copy() to detect this it would need to checksum the old and new files, which would be a lot of overhead for every call. However, if the application tracks changes it could keep track of which files have changed between versions and skip the non-updated ones. (An additional optimisation would be to track files that moved and only move them rather than re-copy the data.)

I notice you work on fastar; that might be a good level at which to implement something like this. While the general trend is away from update-in-place semantics in package management (e.g. Nix), there will always be the more traditional "just unpack it on the server" model for applications.


Not necessarily. Heuristics can be used to limit this to the promising cases (same device, same size), and an identity check does not have to checksum the whole file at all: you can do a byte-by-byte comparison and bail out fast on the first mismatch. In fact this is what FIDEDUPERANGE does in the kernel, so we don't even have to do it in userspace.

Asymptotically you do not need to touch more bytes than a regular copy would: either the files are identical and you do one read of the in-file and one read of the out-file, or they differ and you do one read and one write.

I think a rough outline of the algorithm would be as follows:

  1. open the target file for read-write without O_TRUNC; fall back to write-only in case of EACCES
  2. stat; we already do this
  3. check same-device and same-length; bail otherwise
  4. get FIEMAP and check the FIEMAP_EXTENT_SHARED flags; bail if the target is not shared, since we only want to avoid breaking reflinks
  5. attempt FIDEDUPERANGE on the first extent; bail on FILE_DEDUPE_RANGE_DIFFERS

on bailout: ftruncate1 the tail of the file, as we would have done on open anyway.

  - positive outcome: 2x read, no data writes, disk space saved
  - negative outcome: 2 extra syscalls and 1 extra block read, then fall back to a normal copy
  - most common outcome: 1 extra syscall (the truncate), as the preconditions are not fulfilled

1: This delayed truncate after the stat also avoids the footgun of copying a file onto itself, because we can check whether the device and inode numbers match and bail out if they do.


But yeah, if std does not want a highly-optimized-but-complex copy implementation, then an fs_copy crate would make sense to cover the more advanced needs.