The problem is that there is one more dimension to this. While doing less optimization can mean faster compilation, it can also mean much more code is generated, which means slower compilation.
So it's not just a linear spectrum between fast compiles + slow binaries and slow compiles + fast binaries. When the early stages of the compiler generate more straightforward/naive output, and the program and its libraries use lots of extra layers expecting them to be optimized away, that just increases the amount of data the later stages of the compiler have to churn through: more function prologues/epilogues, more call sequences, more shuffling of values through memory.
For a language like Rust, where idiomatic code involves this much layering and "zero overhead abstractions," early-stage optimizations like MIR inlining probably should be enabled even in debug builds (though probably with different tuning than in release builds). Nobody needs `ptr::read` on a `u8` to compile to three levels of function calls around `memcpy` for any amount of debugging, and producing that output is almost certainly slower than producing a direct inline load.