Here, I disagree, partially because “multi-process system” is rather subtle to define, and partially because there are ways to meaningfully deal with “I, personally, can no longer expect memory allocation to succeed.”
The first point can be illustrated with memory cgroups: I may choose to run a Rust-based program as a service under systemd with a harsh memory limit, or perhaps a generous one but a swap limit of zero. At that point, it’s basically a single-process “system” as far as memory is concerned - the only thing that can make it fail an allocation is that the process itself has already used up what’s available.
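For concreteness, a minimal unit fragment along these lines sets up exactly that kind of single-process “system”. The service name and binary path are placeholders; `MemoryMax=` and `MemorySwapMax=` are the relevant `systemd.resource-control` directives under cgroup v2:

```ini
# Hypothetical unit fragment - the binary path is made up.
[Service]
ExecStart=/usr/local/bin/my-rust-service
# Hard cap on the unit's memory use.
MemoryMax=256M
# Forbid the unit from using swap at all.
MemorySwapMax=0
```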
The second point really comes down to what was mentioned at the very beginning: the difference between hard termination and clean shutdown. If we presume the OOM killer is not an issue (either by platform or by configuration), then even if the lack of free memory is some other process’s fault, it may be beneficial to the end user for the process that experienced the failure to tidy up its desk, pack up its toys, and turn the lights out on the way down.
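As a rough sketch of what that can look like in Rust - the `Journal` type and its method below are invented for illustration, but `Vec::try_reserve` is the standard library’s hook for fallible allocation:

```rust
use std::collections::TryReserveError;

// Hypothetical stand-in for whatever state the process needs to persist.
struct Journal {
    entries: Vec<String>,
}

impl Journal {
    // Fallible append: surface allocation failure as an Err instead of aborting.
    fn try_append(&mut self, entry: String) -> Result<(), TryReserveError> {
        self.entries.try_reserve(1)?;
        self.entries.push(entry);
        Ok(())
    }
}

fn main() {
    let mut journal = Journal { entries: Vec::new() };
    if let Err(e) = journal.try_append("important record".to_string()) {
        // Allocation failed: tidy up the desk before turning the lights out.
        eprintln!("allocation failed ({e}); flushing state and shutting down cleanly");
        // ... flush buffers, sync files, release locks, notify peers ...
        std::process::exit(1);
    }
}
```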
I’ve said elsewhere that there are, in the end, only three ways of handling OOM:
- Retry
- Lose something nonessential
- Die
All three may be valid - and in fact, some (such as Retry) are more valid in the multi-process setting than in the single-process setting (someone else might back down).
And “Die” doesn’t always mean “fall over” - the process may well be able to put its affairs in order first.
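A sketch of what choosing between the three might look like, again leaning on `try_reserve`; every type and function name here is invented for illustration, not a real API:

```rust
use std::collections::TryReserveError;
use std::time::Duration;

// Hypothetical cache of nonessential data we can shed under memory pressure.
struct Cache {
    entries: Vec<Vec<u8>>,
}

enum OomResponse {
    Retry,
    ShedCache,
    Die,
}

// Try to grow `buf` by `additional` bytes, applying one of the three responses
// if the allocation fails.
fn grow_or_handle(
    buf: &mut Vec<u8>,
    additional: usize,
    cache: &mut Cache,
    response: OomResponse,
) -> Result<(), TryReserveError> {
    match buf.try_reserve(additional) {
        Ok(()) => Ok(()),
        Err(e) => match response {
            // Retry: wait briefly and try again - someone (maybe us, maybe
            // another process) might free memory in the meantime.
            OomResponse::Retry => {
                std::thread::sleep(Duration::from_millis(100));
                buf.try_reserve(additional)
            }
            // Lose something nonessential: drop cached data, then retry.
            OomResponse::ShedCache => {
                cache.entries.clear();
                cache.entries.shrink_to_fit();
                buf.try_reserve(additional)
            }
            // Die - but put our affairs in order first rather than falling over.
            OomResponse::Die => {
                eprintln!("out of memory: {e}; shutting down cleanly");
                // ... flush state, close files, release resources ...
                std::process::exit(1);
            }
        },
    }
}

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    let mut cache = Cache { entries: Vec::new() };
    // Under a tight cgroup limit, even a modest request can fail.
    if grow_or_handle(&mut buf, 4096, &mut cache, OomResponse::ShedCache).is_err() {
        eprintln!("still out of memory after shedding the cache");
    }
}
```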