This one's interesting. The core allocation methods, like Vec::push and Vec::append, document a panic on capacity overflow (e.g. when a Vec of zero-sized types grows past usize::MAX elements), but do not document a panic on memory exhaustion, and I think the reason is that memory exhaustion is not actually a panic: it has a similar effect in practice, but it's implemented differently. See handle_alloc_error for more information – it looks like the current rules are to abort if using std and panic if no_std, which implies that nothing in std panics on memory exhaustion.
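For callers that do want to observe allocation failure as a value rather than an abort, the standard library exposes the fallible path separately: Vec::try_reserve returns a Result instead of going through handle_alloc_error. A minimal sketch (the 1024-byte figure is arbitrary, and the oversized request is just a portable way to force an error via capacity overflow):

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();

    // try_reserve reports failure as Err(TryReserveError) instead of
    // calling handle_alloc_error, so the caller can recover.
    match v.try_reserve(1024) {
        Ok(()) => println!("reserved: capacity is now {}", v.capacity()),
        Err(e) => println!("allocation failed: {e}"),
    }
    assert!(v.capacity() >= 1024);

    // A request this large can never succeed (it exceeds isize::MAX
    // bytes), so it fails cleanly rather than aborting the process.
    assert!(v.try_reserve(usize::MAX).is_err());
}
```

Note that this only covers allocations the program makes explicitly; it does nothing for the stack-growth case discussed below, and on an overcommitting OS even a successful try_reserve doesn't guarantee the pages are really there.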
There's also the fact that on some (most?) operating systems, programs don't get denied allocations when the system is out of memory; they just get killed heuristically instead, which means the condition can't be handled from inside the program at all. This makes sense if you think about it: when a system is low on memory, it is in practice almost always because one program is leaking memory quickly. In that situation, denying allocations based on memory pressure would mean that other processes happening to allocate while the system is near the limit would fail their allocations and maybe choose to abort as a result, so you could end up with unrelated processes dying even though they were hardly using any memory at the time and weren't responsible for the shortage. Having the operating system detect which process is at fault and kill that process specifically generally causes less damage to the system as a whole, but it means that you don't get an out-of-memory error you can handle. (Out-of-memory indications do happen in practice, but only when you're using per-program memory quotas: a program can reasonably get an out-of-memory signal if it hits its personal limit on how much memory it can use, even if the system as a whole still has memory to spare.)
It's also worth noting that memory limits can be hit when trying to grow the stack, meaning that you can get an allocation failure as a consequence of any function call at all – even one that claims to handle allocation errors – unless you calculate the amount of stack you need in advance and prefault it.
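The closest Rust's standard library comes to "calculate the stack you need in advance" is choosing a thread's stack size up front with std::thread::Builder::stack_size. A minimal sketch (the 4 MiB budget and the recurse helper are arbitrary illustrations, and note that reserving a stack size does not by itself prefault the pages):

```rust
use std::thread;

// Deliberately recursive function whose stack use grows with `depth`;
// the local array makes each frame meaningfully large.
fn recurse(depth: u32) -> u32 {
    let frame = [depth; 64];
    if depth == 0 { frame[0] } else { recurse(depth - 1) }
}

fn main() {
    // Spawn a thread with an explicitly chosen stack budget, rather
    // than relying on the platform default and hoping it's enough.
    let handle = thread::Builder::new()
        .stack_size(4 * 1024 * 1024)
        .spawn(|| recurse(1000))
        .expect("failed to spawn thread");
    assert_eq!(handle.join().unwrap(), 0);
}
```

This bounds how much stack the thread can consume, but actually prefaulting it (touching every page ahead of time) would need extra, platform-specific work.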
So it makes sense for memory exhaustion to be handled as a special case: it's almost impossible for programs to handle all possible cases of it, and normally the operating system would make better decisions anyway.
(Note that a possible improved OS design involves telling a program the system is out of memory "early" in cases where the program appears to be at fault for the memory exhaustion, and earlier for allocation requests than for stack growth – a scheme like that would allow alloc-error panics to work reliably. I don't believe any OSes work like that at present, but maybe it would make sense to design Rust in anticipation of a world where they might exist.)