Allocator bin-size-aware collection capacity resizing?

I was just discussing how memory allocators are usually based on 2^n-sized bins, so they tend to waste some memory with every allocation no matter what.

If you play with the allocation size in this example, you'll discover that this particular allocator seems to add 8 B of per-allocation overhead and then rounds the total up to the next 2^n.
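To make that concrete, here's a minimal sketch of that model. The 8-byte header and pure power-of-two bins are assumptions taken from the observation above, not a documented contract of any real allocator:

```rust
// Hypothetical model of the allocator observed above: a fixed 8-byte
// per-allocation header, with the total (payload + header) rounded up
// to the next power-of-two bin. Both numbers are assumptions.
const HEADER: usize = 8;

/// Bytes the modeled allocator would actually reserve for a `size`-byte request.
fn bin_footprint(size: usize) -> usize {
    (size + HEADER).next_power_of_two()
}

fn main() {
    for size in [8, 24, 56, 57, 120, 128] {
        let footprint = bin_footprint(size);
        println!(
            "request {size:>4} B -> bin {footprint:>4} B ({} B wasted)",
            footprint - HEADER - size
        );
    }
}
```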

Which made me think: AFAIR, by default Rust doubles the capacity of a collection like `Vec` when it grows. So if I have a `T` whose size is a power of two (like most primitive types) and start with a capacity of 0 or some 2^n (which is most common), the allocation will always unnecessarily land in the next-larger bin due to that fixed per-allocation overhead, potentially almost doubling the real memory usage for no good reason.
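Under the same assumed model, walking a `Vec<u64>` through the doubling sequence shows this: every power-of-two payload lands just past a bin boundary because of the header, so roughly half of each bin goes unused:

```rust
// Continuing the assumed model: each doubled Vec<u64> capacity is a
// power-of-two payload, which the 8-byte header pushes into the
// next-larger bin.
const HEADER: usize = 8;

fn bin_footprint(size: usize) -> usize {
    (size + HEADER).next_power_of_two()
}

fn main() {
    for cap in [4usize, 8, 16, 32, 64] {
        let bytes = cap * std::mem::size_of::<u64>();
        println!(
            "cap {cap:>3} = {bytes:>4} B -> sits in a {:>4} B bin",
            bin_footprint(bytes)
        );
    }
}
```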

I was wondering: would it make sense to make resizing aware of the 2^n-sized bins that most allocators use, and have it allocate a capacity that, in byte terms, falls right under the 2^n boundary to improve memory usage?
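A sketch of what that calculation could look like, still under the assumptions above (8-byte header, power-of-two bins); `bin_aware_cap` is a hypothetical helper, not an existing API:

```rust
// Pick the largest capacity whose byte size (plus the assumed 8-byte
// header) still fits in the power-of-two bin that the minimal request
// would land in anyway.
const HEADER: usize = 8;

/// Largest element count of byte size `elem` that fits in the bin a
/// `min_cap`-element allocation would occupy.
fn bin_aware_cap(min_cap: usize, elem: usize) -> usize {
    let bin = (min_cap * elem + HEADER).next_power_of_two();
    (bin - HEADER) / elem
}

fn main() {
    // Growing a Vec<u64> (8-byte elements) past capacity 16:
    // 17 elements = 136 B + header -> 256 B bin, which has room for 31
    // elements, so grow to 31 rather than paying for mostly-empty space.
    assert_eq!(bin_aware_cap(17, 8), 31);
    println!("grow to {} elements", bin_aware_cap(17, 8));
}
```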

Does this make sense? Is it maybe already implemented somehow?

Some allocators have more complex binning, e.g. adding 2^N + 2^(N-1) buckets to the mix.

But sure, we can try changing the amortized resizing logic of `RawVec` to target the nearest power-of-two bin instead of plain doubling, and then see what perf.rlo says. Maybe search the PR history first to see if it has been tried before. We can't change explicit resizing though, since the user may want a specific capacity, e.g. to turn a vec into a boxed slice; a sketch of that split follows below.
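Not the real `RawVec` code, just a toy sketch of where such a change would live: only the amortized path consults the bin model, while an exact request keeps the caller's capacity untouched. The 8-byte overhead is still the assumption from earlier in the thread:

```rust
const HEADER: usize = 8; // assumed per-allocation overhead, as above

/// Amortized growth (the `reserve`-style path): double first for the
/// usual amortization guarantee, then stretch the capacity to fill the
/// power-of-two bin the allocation lands in anyway.
fn grow_amortized(cur_cap: usize, needed: usize, elem: usize) -> usize {
    let min_cap = needed.max(cur_cap * 2);
    let bin = (min_cap * elem + HEADER).next_power_of_two();
    (bin - HEADER) / elem
}

/// Exact growth (`reserve_exact` / `into_boxed_slice`): the user asked
/// for precisely this many elements, so no bin-aware rounding here.
fn grow_exact(needed: usize) -> usize {
    needed
}

fn main() {
    // Vec<u64> at capacity 16 pushing a 17th element: doubling asks for
    // 32 elements = 256 B + header -> 512 B bin, which holds 63 elements.
    assert_eq!(grow_amortized(16, 17, 8), 63);
    assert_eq!(grow_exact(17), 17);
}
```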


OK... Gave it a try: Grow `RawVec` to fill the allocator bins tighter. by dpc · Pull Request #101341 · rust-lang/rust · GitHub

