Currently, running out of memory always aborts.
However, there are processes that need to survive OOM – as this thread states, kernels and microkernel servers are two examples. In these applications, one probably does not want any form of unwinding at all – it requires custom code and is not very predictable. Furthermore, such code often has soft (or even hard) real-time requirements, as well as the need to minimize fragmentation. Therefore, it often uses custom allocators instead of implementing the standard Rust allocator interface.
But there are other examples of code that needs to survive OOM, or at least run cleanup code. The main cases I can think of are:
- a program that needs to persist some state to disk on OOM (before exiting) – perhaps to roll back a file format (such as .docx or .odf) that cannot be updated atomically and would otherwise be left in a corrupt state, though a better solution is to use OS-provided atomic renames.
- a program that stores large amounts of data in caches, and on OOM can drop some or all of these caches and retry the allocation. Such programs often run close to the limits of available memory anyway (for performance), and are designed to run with paging and overcommit disabled.
Do these provide a sufficiently compelling reason to be able to unwind from OOM, or at least run some cleanup hooks?