I think the server aspect of this discussion is under-appreciated. Note that none of the people you interviewed are working on servers. Embedded is not quite the same.
I think there are at least three different types of servers with distinct memory-use patterns:
- Request-based (think web application): many concurrent requests, each with individually low memory use. In principle you might be able to contain the effects of a memory allocation failure to the individual request that triggered it.
- Classic RDBMS: a long-running process that uses as much memory as possible. A process should never die.
- Big Data: a (relatively) short-running process that uses as much memory as possible. A process abort can be handled gracefully but should be avoided as it increases overall computation time.
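The request-based case can be sketched in a few lines of Rust, since its standard library exposes fallible allocation via `Vec::try_reserve`. This is a minimal illustration, not any particular server's code: `handle_request` is a hypothetical handler that turns an allocation failure into a per-request error (e.g. an HTTP 500) instead of aborting the whole process.

```rust
// Sketch: isolating allocation failure per request (hypothetical handler).
// `Vec::try_reserve` reports allocation failure as a Result instead of
// aborting, so one oversized request can fail while others proceed.
fn handle_request(body_len: usize) -> Result<Vec<u8>, String> {
    let mut buf = Vec::new();
    buf.try_reserve(body_len)
        .map_err(|e| format!("allocation failed: {e}"))?; // map to a 500, not a crash
    buf.resize(body_len, 0);
    Ok(buf)
}

fn main() {
    // A reasonably sized request succeeds...
    assert!(handle_request(1024).is_ok());
    // ...while an absurdly large one is rejected gracefully,
    // leaving the process (and other requests) alive.
    assert!(handle_request(usize::MAX / 2).is_err());
    println!("ok");
}
```

Whether this isolation holds in practice depends on the platform: with Linux overcommit enabled, the allocation may "succeed" and the OOM killer strikes later, outside any per-request error path.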
I can put you in touch with a Spark developer if you’re interested.