I was just discussing how memory allocators are usually based on 2^n-sized bins, so they tend to waste memory with every allocation no matter what.
If you play with the allocation size in this example, you'll discover that this particular allocator seems to add 8 bytes of overhead to each allocation and then round the total up to the next power of two.
Which made me think: AFAIR, by default Rust doubles the capacity of a collection like `Vec` when it grows. So if I have a T whose size is a power of two (like most primitive types), and the capacity starts at 0 or at some 2^n (which is the common case), the payload will always be exactly 2^n bytes, and that fixed per-allocation overhead pushes it into the next-size-up bin every time, possibly almost doubling the real memory usage for no good reason.
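To illustrate the arithmetic (assuming, hypothetically, a binning allocator with a fixed 8-byte per-allocation header, as observed above):

```rust
// Assumed per-allocation overhead, as measured in the example above.
const HEADER: usize = 8;

// Bin size a hypothetical power-of-two binning allocator would pick:
// round (header + payload) up to the next power of two.
fn bin_size(payload: usize) -> usize {
    (payload + HEADER).next_power_of_two()
}

fn main() {
    // A Vec<u64> (8-byte elements) doubling its capacity:
    for cap in [1024usize, 2048, 4096] {
        let bytes = cap * std::mem::size_of::<u64>();
        let bin = bin_size(bytes);
        println!(
            "cap {:5} -> {:6} payload bytes -> {:6}-byte bin ({}% wasted)",
            cap,
            bytes,
            bin,
            100 * (bin - bytes) / bin
        );
    }
}
```

With those assumptions, an 8 KiB payload becomes an 8200-byte allocation and lands in a 16 KiB bin, so nearly half the bin is wasted at every capacity.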
I was wondering: would it make sense to make resizing aware of the 2^n-sized bins that most allocators use, and have it pick a capacity that, in byte terms, falls just under a 2^n boundary, to improve memory usage?
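A minimal sketch of what I mean, again assuming the hypothetical 8-byte header (`bin_aware_cap` is my own made-up helper, not anything in `std`):

```rust
// Assumed fixed per-allocation overhead (hypothetical).
const HEADER: usize = 8;

/// Largest capacity whose byte size plus the assumed header still fits
/// in the smallest power-of-two bin able to hold `min_cap` elements.
fn bin_aware_cap<T>(min_cap: usize) -> usize {
    let elem = std::mem::size_of::<T>().max(1);
    let bin = (min_cap * elem + HEADER).next_power_of_two();
    (bin - HEADER) / elem
}

fn main() {
    // Instead of doubling 1024 -> 2048 u64s (16384 payload bytes, which
    // overflows the 16 KiB bin once the header is added), a bin-aware
    // grow could pick 2047 elements: 16376 bytes + 8-byte header fills
    // the 16 KiB bin exactly.
    println!("{}", bin_aware_cap::<u64>(1025)); // 2047
}
```

So the capacity sequence would go 1024, 2047, 4095, ... rather than 1024, 2048, 4096, ..., trading one element per step for not spilling into the next bin.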
Does this make sense? Is it maybe already implemented somehow?