Check disk space before compilation

There is a compiler bug (an ICE) that occurs when the disk is full - GitHub issue references: #115542, #115298, #114671, #114045, #108100, and #106787. This bug is either fixed or will soon be fixed with a more user-friendly error message indicating that the disk is full. However, I propose that the compiler/cargo itself could estimate the required disk space BEFORE starting any compilation process and fail with an error message such as:

"Insufficient disk space to compile the necessary files. To attempt and force compilation anyway, use --force-low-disk-space."

At the end of each compilation, it could also issue a warning indicating that disk space is running low. This way, the user can proactively resolve the issue before hitting the limit, something like:

"WARN: Disk space is running low, which may affect future compilations. Free up more disk space as soon as possible."

I've also suggested this at: Compiler panics when disk is full #115298.

How is the compiler supposed to know how much space it will need?

Estimate.

What do you propose is used to estimate the amount of free space available? Complications that should be considered (see the sketch after this list for the separate-partitions point):

  • ccache or sccache may store information on a separate partition or duplicate data on the same partition
  • target/ may be on a separate partition from the source tree (or the Cargo cache)
  • Are download sizes counted? What about their extracted sizes?
  • NFS can be weird about remaining space in my experience (e.g., two shares backed by the same partition can report different free-space figures when mounted)
  • Deduplicating filesystems may be able to "store" more than is "free" if the data can be deduplicated
  • Compression in the filesystem can also confuse things
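To make the separate-partitions point concrete: any estimate would have to query every location a build can write to individually, because each may sit on a different filesystem, and each number may still mislead under thin provisioning, deduplication, or compression. A rough sketch, with the paths and the fs2 crate as illustrative assumptions:

```rust
use std::path::Path;

// Each of these locations can live on its own filesystem with its own
// free-space figure, so a single number is not meaningful.
fn report_free_space(paths: &[&Path]) {
    for path in paths {
        match fs2::available_space(path) {
            Ok(bytes) => println!(
                "{}: {} MiB available",
                path.display(),
                bytes / (1024 * 1024)
            ),
            Err(e) => println!("{}: could not stat ({e})", path.display()),
        }
    }
}

fn main() {
    report_free_space(&[
        Path::new("target"),             // build artifacts
        Path::new("/home/user/.cargo"),  // Cargo cache (illustrative path)
        Path::new("/var/cache/sccache"), // sccache storage, if any (illustrative)
    ]);
}
```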

I feel like this is prone to a lot of false positives and false negatives. I feel like an ICE (even if the error-message improvement lands later) is better than putting a lot of work into a poor heuristic when the best course of action is still the same.

6 Likes

Another argument to add to the list against this heuristic is that you shouldn't be running at close to full on your disks anyway:

Most file systems perform better when they are not too close to full. This was even more true back in the day on spinning disks, where fragmentation was a big deal, but as far as I know it is still true to some extent.

As such it seems unlikely to come up in practice very often. And as long as it doesn't cause miscompiles and has a reasonable error message (e.g. "out of space" or even "failed to create/write file"), I don't really think that this is worth the effort.

1 Like

How? Figuring out how much disk space will be used is very complicated and in no way solved by a one-word answer. Have you considered the intricacies mentioned? How about the fact that someone may be in a VM where disk space can magically appear when it's running out?

Why should the compiler even do that?

Very few applications check disk space in advance; not even cp does, and you'd think its disk-space impact would be easy to predict (Narrator: it isn't).

The most reliable way to test whether something is possible (such as opening a file) is not to do pre-checks. It's simply to execute the operation and see if it goes through. Writing files to disk is similar.
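That is also exactly the shape of the fix people are asking for here: attempt the write, and if it fails with ENOSPC, report it in plain language instead of an ICE. A minimal sketch of the idea (the message wording is made up):

```rust
use std::fs::File;
use std::io::{self, Write};
use std::path::Path;

// Attempt the write and translate a "disk full" failure into a readable
// error, instead of pre-checking free space.
fn write_artifact(path: &Path, bytes: &[u8]) -> io::Result<()> {
    let mut file = File::create(path)?;
    file.write_all(bytes)?;
    file.sync_all()
}

fn main() {
    if let Err(e) = write_artifact(Path::new("target/out.o"), b"...") {
        // ErrorKind::StorageFull is available on recent toolchains; older
        // ones can compare raw_os_error() against ENOSPC instead.
        if e.kind() == io::ErrorKind::StorageFull {
            eprintln!("error: the disk is full; free up space and re-run the build");
        } else {
            eprintln!("error: failed to write target/out.o: {e}");
        }
    }
}
```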

I'll note that this list of complications is far from exhaustive.

I didn't say it would be easy, but it's possible to make an estimate that works for 99% of the cases. Indeed, I'm not sure if it's worth the effort; it's really not a common occurrence.

Due to my oversight, I was using a small partition and suddenly encountered that ICE, which wasn't so obvious to resolve. At that moment I thought: "Well, why didn't it warn me before starting the compilation?" or "Why didn't I receive any warning about this?" - when the error occurred it was already too late, and a disk-space shortage often causes numerous other issues in the operating system. I really wasn't aware of what was going on and found out in the worst possible way, with a compiler message: "This is a bug, please report it."

I'm not the only one who ran into this issue. But maybe the ICE fix with a better message would be enough.

On a general note: this is a hard problem, yet precisely the kind of problem that a systems language like Rust should be capable of solving incrementally from the bottom up, at least for some portions. Gaining better fallible-allocator support and analogues for filesystem interaction would probably be a neat byproduct.


There are quite a lot of details involved in getting accurate resource requirements up front. Many steps will require dynamic amounts of memory, for instance when constructing lifetime dependency graphs, calculating connected components, or running the LLVM stages. Writing to disk is probably the last of those to happen, and disk space is only one resource to consider here.

I think the compiler should never fail just because its up-front approximation of resources exceeds the available space. That estimate could just as well be too large as too small, inserting unnecessary constraints into the compilation process on systems that could handle it just fine. It may be interesting to investigate whether there are low-hanging fruits in the form of lower bounds instead, but realistically that is not so easy. A general warning would be more interesting, and still I'm a little doubtful how well and how tightly it can be tuned. There are a lot of steps with data-dependent resource requirements. Evaluating the resource requirements can't involve executing the actual stage (that would be too much of a sequential slowdown), so at best it comes down to heuristics and metadata computation. Any form of filtering the steps, elimination of duplicates, dynamic programming and memoization hitting all the caches, etc. involves lots of data structures with gaps between the asymptotic worst-case cost, the mean cost, and the one your code will actually get. So which of these costs should be used to issue the up-front warning?

4 Likes

I was thinking of a way to avoid larger issues such as data corruption, loss of unversioned or un-backed-up source code, sudden system crashes, among others. The idea wasn't to create something perfect that could predict down to a single bit, but rather a very rough approximation that could give the user a chance to react in time. It might not even be an appropriate function for the compiler. Given the challenges and the limited benefit relative to the potential cost, I think we can close the discussion.

rustc crashing due to out of disk space is not really a reliability/resilience issue. Yes, it should have a nicer error message. But the work it does is not precious. You can free up space and restart compilation.

Other programs crashing and losing data is a reliability/resilience bug in those programs. E.g. a text editor shouldn't crash and lose edits, it should instead display an error and allow you to try again once you've made some space. This is not something for rustc to solve.

Alternatively, this is a sysadmin issue. E.g. you can use quotas or separate filesystems to separate critical components from more transient things that are allowed to fail. Or you can set up monitoring. At the simplest level, many desktop systems have status bars that can display various system parameters (battery, storage, network connectivity) and start displaying them in red when crossing some threshold.

5 Likes

It still seems a bit strange to me that abrupt failure is better than some prior (even rough) estimation. Imagine a system that takes hours to compile, and only at the end does the user receive a message, risking a system crash and a difficult recovery state. Suppose each new compilation takes up 50 MB, and the system only has 10 MB available. Why start when it's obvious it will fail? If the last compilation took 50 MB, why would a new one with the same parameters be any different?
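As a strawman, the "same parameters, same footprint" heuristic could be as crude as comparing the previous build's target/ size with the space currently available. A sketch, purely illustrative and again assuming something like fs2::available_space:

```rust
use std::{fs, io, path::Path};

// Total size of an existing directory tree, e.g. the previous build's target/.
fn dir_size(dir: &Path) -> io::Result<u64> {
    let mut total = 0;
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        total += if meta.is_dir() {
            dir_size(&entry.path())?
        } else {
            meta.len()
        };
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    let target = Path::new("target");
    let last_build = dir_size(target)?;
    let available = fs2::available_space(target)?;
    if available < last_build {
        eprintln!(
            "warning: the previous build used ~{} MiB but only {} MiB are free",
            last_build / (1024 * 1024),
            available / (1024 * 1024)
        );
    }
    Ok(())
}
```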

So far, many arguments in this thread have been weak, but there was one that was really good: is this the compiler's job? Other similar applications don't perform any kind of catastrophe prevention, including basic commands such as "cp". My view is that the compiler doesn't quite fit into this category of "a basic system". However, the consequence of unexpectedly running out of space is not as severe in Rust's case because it operates in a "read-only" mode - in other words, it probably won't damage the source code unless the system has some other type of collateral failure.

In summary, it seems that it really isn't the compiler's role to perform this kind of prevention, although it could be a feature that saves some headaches, like wasted time or system instability. In my specific case, I accidentally started compiling the project on a small partition and encountered this internal compiler error. I remember that afterward I couldn't even list files due to the lack of disk space. The system couldn't create basic temporary files, and a restart was necessary.

1 Like

BTW, I'm shocked how much the target dir balloons. It turns kilobytes of source code into gigabytes of temporary files.

Has anyone been looking into what is actually taking up so much space? Maybe there's some low-hanging fruit that could reduce disk usage, which also would make running out of disk space less likely.

9 Likes

Are you on some very limited low-end system? You talk about hours of compilation time, only having 10 MB available, and directories becoming non-enumerable when the disk is full (this doesn't seem normal!).

I'd say rustc isn't optimized for such systems, it's fortunate if it can be made to work under such circumstances at all.

In my specific case, it was indeed an exception. I was working on a relatively small partition (128 GB) and my project generates huge videos, so the space ran out quickly without me noticing. This is not common. A clearer error message would solve the problem. Would I be happy with a prior warning? Yes, I would. But it's far from being a critical issue.

It's mostly filled up by rlibs from dependencies and the incremental cache. The incremental cache is mostly made up of all the cached final object files from every codegen unit.
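If you want to see the breakdown on your own machine, a small directory walk that tallies bytes per file extension (nothing cargo-specific) gives a quick picture of whether .rlib files, incremental .o files, or something else dominates:

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::Path;

// Walk a directory tree and accumulate bytes per file extension.
fn tally(dir: &Path, by_ext: &mut HashMap<String, u64>) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let path = entry.path();
        if path.is_dir() {
            tally(&path, by_ext)?;
        } else {
            let ext = path
                .extension()
                .map(|e| e.to_string_lossy().into_owned())
                .unwrap_or_else(|| "<none>".into());
            *by_ext.entry(ext).or_insert(0) += entry.metadata()?.len();
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut by_ext = HashMap::new();
    tally(Path::new("target"), &mut by_ext)?;
    // Print the largest extensions first.
    let mut rows: Vec<_> = by_ext.into_iter().collect();
    rows.sort_by_key(|&(_, bytes)| std::cmp::Reverse(bytes));
    for (ext, bytes) in rows {
        println!("{:>8} MiB  {ext}", bytes / (1024 * 1024));
    }
    Ok(())
}
```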

It may not actually fail even though you think it would: maybe someone freed up space while rustc was busy compiling, or the filesystem automatically grows when it gets filled, compresses your data, or even deduplicates it in real time.

On RTOSes it is possible to precisely account for all resources. This is not possible on desktop and server OSes: those deliberately overprovision resources to increase efficiency. They let processes consume 100% of the CPU when nothing else is running, even though another process may come around and consume CPU time too, preventing the first process from using 100%. An RTOS would force all processes to consume no more than a fixed share of CPU, with everything adding up to 100%. Similarly, you may overprovision a filesystem (or, in the case of NVMe disks, overprovision namespaces, a.k.a. logical disks) and allow all processes to consume 100% of it and then use less space again later. This means there is no way to know exactly how many resources you will be able to get until you try.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.