Are custom allocators the right abstraction?

This is similar to an idea I had but never fleshed out about owned allocators. We could embed a slab allocator into the Vec itself, and it would work really well for some use cases. Of course, this still wouldn't be as nice or as performant as a dedicated SmallVec or this idea, but it has some cool potential. (And of course, self-borrows rear their ugly head.)

This abstraction really fits well for a single allocation container (basically anything contiguous). Otherwise, you are pretty much reimplementing a slab allocator, with all of the fun that entails.

I'd like to add another advantage: A can be ?Sized, as it's not stored.

I'd like to point out that the AllocRef trait is by no means set in stone, and every method and even the generic parameter is gated behind #[feature(allocator_api)].

Yes, this is why I thought it important to mention it as soon as possible, while things are still in flux :slight_smile:

I don't really see how InlineVec solves this problem. Using your provided implementation, it could look like this:

I've amended the Poc.

I just added (on top of a few feature toggles):

unsafe impl<T, const N: usize> ContiguousStorage<T> for [mem::MaybeUninit<T>; N] {
    type CapacityType = usize;

    fn new(size: usize) -> Result<Self, ()> {
        // The inline capacity is fixed at N; creation fails for larger requests.
        if size <= N {
            Ok(mem::MaybeUninit::uninit_array())
        } else {
            Err(())
        }
    }

    fn storage(&self) -> &[mem::MaybeUninit<T>] { &*self }

    fn storage_mut(&mut self) -> &mut [mem::MaybeUninit<T>] { &mut *self }

    fn try_resize(&mut self, new_size: usize) -> Result<(), ()> {
        // Inline storage cannot actually grow; "resizing" succeeds only within N.
        if new_size <= N {
            Ok(())
        } else {
            Err(())
        }
    }
}

And instantly I can do:

fn show_off<S: ContiguousStorage<i32>>(storage: S) {
    let mut vec = Vec::with_storage(storage);

    vec.push(1);
    vec.push(2);

    println!("{:?}", vec.as_slice());

    let element = vec.pop();
    println!("{:?}, {:?}", element, vec.get(1));
}

fn main() {
    show_off(HeapStorage::new(0).unwrap());
    show_off([mem::MaybeUninit::<i32>::uninit(); 2]);
}

Note that the important difference with the Allocator API is that my definition of Vec never stores a pointer. Instead, anytime it needs a pointer to the storage it asks the storage.

This is the critical difference that allows moving the Vec (and its storage) around without issues.
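To make that concrete, here is a minimal, self-contained sketch of that shape. MiniVec and this trimmed-down ContiguousStorage are illustrative only, not the actual PoC:

```rust
use core::marker::PhantomData;
use core::mem::MaybeUninit;

trait ContiguousStorage<T> {
    fn storage(&self) -> &[MaybeUninit<T>];
    fn storage_mut(&mut self) -> &mut [MaybeUninit<T>];
}

// An inline array is a storage: it moves together with whatever owns it.
impl<T, const N: usize> ContiguousStorage<T> for [MaybeUninit<T>; N] {
    fn storage(&self) -> &[MaybeUninit<T>] { self }
    fn storage_mut(&mut self) -> &mut [MaybeUninit<T>] { self }
}

struct MiniVec<T, S: ContiguousStorage<T>> {
    storage: S, // no pointer cached anywhere
    len: usize,
    _marker: PhantomData<T>,
}

impl<T, S: ContiguousStorage<T>> MiniVec<T, S> {
    fn new(storage: S) -> Self {
        MiniVec { storage, len: 0, _marker: PhantomData }
    }

    fn push(&mut self, value: T) {
        // The slot is re-derived from the storage on every call, so the
        // vector (and its inline storage) can be moved freely in between.
        self.storage.storage_mut()[self.len] = MaybeUninit::new(value);
        self.len += 1;
    }

    fn as_slice(&self) -> &[T] {
        // SAFETY: the first `len` slots were initialized by `push`.
        unsafe {
            core::slice::from_raw_parts(self.storage.storage().as_ptr().cast(), self.len)
        }
    }
}
```

Because `as_slice` re-derives the pointer from the storage each time, a `MiniVec` backed by an inline array can be moved by value and remain valid (dropping of elements and capacity checks are omitted for brevity).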

When retrieving a pointer to the data of InlineVec, it would also be invalidated after moving the vector. Would you simply say that dereferencing a pointer is unsafe, so the user has to take care?

How is it different from the current interface? Dereferencing a pointer is always unsafe.

I'd like to invert your statement: what is not possible with Storage, but is possible with AllocRef, when used in collections and smart pointers? (I think nothing, as AllocatorStorage could be a wrapper around AllocRef.)

I think indeed that Storage is strictly more generic, and I'll continue hacking on the PoC today to see where it leads me. I just want to avoid raising expectations too much and letting everyone down if I overlooked something :wink:

I'm not sure what "inline storage" means but with C++'s allocator model, it is possible to have an allocator that pulls all of its storage from a stack-allocated buffer.

A stack-allocated buffer is indeed possible in C++; however, it is restricted.

If I build a std::map<std::basic_string<char, stack_allocator>, V>, what is the size of the stack_allocator that I need? This will depend on the map.

If instead I build a std::map<inline_string<24>, V>, then the storage of each string is contained within the string itself.

Inline storage is more flexible than stack storage.

Which is why I strongly believe that the C++ allocator model is NOT necessarily the right abstraction, here, since it doesn't handle storing the state within the allocator itself.

However, on the topic of learning from C++, I find it absolutely bewildering that the approach publicly denotes the type which it will internally allocate.

Indeed.

I touched on it briefly when mentioning heterogeneous storage. I think the storage trait should be independent of the elements; I just haven't found a good (easy-to-use) way to express it yet.

Do you want to add the full topic there or do you want me to open an issue with a short description and a link?

I'm not familiar with the way you prefer to work.

The idea is still very rough. I'm relatively confident it can work well for contiguous storage, but rather less so for non-contiguous storage such as that needed by BTreeMap.

I'll let you judge the right time (maturity?) to open the issue.

Ah sure. That would also replace the strange Vec::into_raw_with_alloc; it would just be Vec::storage.

Nice to hear! I'm very intrigued by this idea and have already started diving into it. A few notes on your implementation:

  • Yes, Capacity is cursed, but I think this can be fixed more or less with try_capacity_from or similar. Accessing the storage without bounds checking should simply use as instead of TryInto. It also only has to be implemented for u8..=u64 and usize, I think, so we don't need the generic implementation using TryFrom and TryInto. With a bit of effort, this can be implemented with macros depending on the size of usize and we're good. With this in mind, the trait could even be sealed, but that doesn't matter right now.
  • HeapStorage<T> is pretty inflexible. A ContiguousStorage doesn't have to care where its memory comes from, so why not abstract Heap away and use AllocatorStorage<T, A: AllocRef> instead (or whatever we come up with for Box<T, ???>)? Or do you want to ban AllocRef in general?
    pub struct AllocatorStorage<T, A: AllocRef = Global>(Box<[MaybeUninit<T>], A>);
    
  • I like the implementation of ContiguousStorage<T> for [mem::MaybeUninit<T>; N]. I came up with BuildContiguousStorage<T>: ContiguousStorage<T>, but keeping the number of traits small is better, I think.
  • (Nitpick: for compute_minimum_capacity better use next_power_of_two: https://godbolt.org/z/oq9oTE)
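For reference, the macro-based, integer-only Capacity idea from the first bullet might look roughly like this (all names here are made up for illustration):

```rust
// Sketch of a capacity trait implemented only for unsigned integer types via
// a macro, as suggested above. `from_usize` checks once at growth time;
// `as_usize` is the unchecked `as` cast used on the hot path.
trait Capacity: Copy {
    const MAX_USIZE: usize;
    fn from_usize(n: usize) -> Option<Self>;
    fn as_usize(self) -> usize;
}

macro_rules! impl_capacity {
    ($($t:ty),*) => {$(
        impl Capacity for $t {
            const MAX_USIZE: usize = <$t>::MAX as usize;
            fn from_usize(n: usize) -> Option<Self> {
                if n <= Self::MAX_USIZE { Some(n as $t) } else { None }
            }
            fn as_usize(self) -> usize { self as usize }
        }
    )*};
}

// u64 would additionally need a cfg on the pointer width, as the bullet notes.
impl_capacity!(u8, u16, u32, usize);
```

Sealing the trait would then just be a matter of adding a private supertrait next to the macro.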

Nice work!

HeapStorage<T> is pretty inflexible.

It should definitely be AllocatorStorage<T, A: AllocRef>, I just haven't gotten around to it yet.

Here's the updated PoC.

(Nitpick: for compute_minimum_capacity better use next_power_of_two : https://godbolt.org/z/oq9oTE)

Dang, I hadn't even noticed this was defined in std. Rust spoils me :slight_smile:


Regarding ContiguousStorage<T>, I tried making it untyped, so it would be usable for HashMap if it wants to use a single allocation for both some prefix and an array of entries, and this didn't go well.

I'm thinking that it may be better to shun the attempt at genericity and simply create a Storage trait for each collection that needs it: BoxStorage, VecStorage<T>, VecDequeStorage<T>, HashMapStorage, ...

(Creating a ContiguousStorage from a Layout is easy enough; however, the capacity is then stored as a number of bytes, not a number of items, and that means a division -- by a constant, but still -- to actually retrieve the capacity in number of items.)
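The division in question is tiny, but it would sit on the capacity path; a minimal illustration:

```rust
// The cost being discussed: an untyped storage tracks its capacity in bytes,
// so a typed view has to divide by the element size. The divisor is a
// constant, so the compiler lowers it to a shift or multiply, but it's still
// extra work. (ZSTs, where size_of is 0, would need a special case.)
use core::mem::size_of;

fn capacity_in_items<T>(capacity_in_bytes: usize) -> usize {
    capacity_in_bytes / size_of::<T>()
}
```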

In case it got lost at the top, generic-vec is exactly this, when applied to Vec. I started working on it early last week, and have gotten mostly up to parity with Vec. generic_vec::RawVec is equivalent to Storage, and I have a slice, array, and heap based storages implemented.

I have found that there is nothing special that an AllocatorStorage (generic_vec::Heap) can do that can't be done by Storage (generic_vec::RawVec) in general. Exactly because generic_vec::Heap can implement generic_vec::RawVec.

generic_vec::Heap can use AllocRef on nightly, and the entire crate can be used on no_std, with no loss in functionality besides a growable vector (no heap). So if you have an upper bound on your vectors, you can use generic-vec in no_std mode with no issues. (Note: generic-vec also includes some of the nightly features of Vec, like drain_filter, and advanced iterators like splice, all generic over the storage strategy.)

Amazing tour de force :clap: Maybe in my next lifetime I'll be able to create something like that in just one week. It's a brilliant demonstration of what can be done in/with Rust.

A large part of this crate was stolen from std, so major props to the std authors. Although I'm most proud of unifying drain, drain_filter and splice as thin wrappers around a raw_drain, which does all the heavy lifting and can expose a nice (but unsafe) API. This is slightly different from std, which bases everything on Drain and doesn't expose a nice unsafe API.

It seems like this feature could work interestingly with https://github.com/rust-lang/rust/pull/69985 and the ArrayVec equivalent mentioned in https://github.com/rust-lang/rust/pull/75717.

let items: [_; 20] = iterator.collect::<Result<_, Vec<_, [MaybeUninit<_>; 20]>>>()?;
I definitely missed it. I'll link to it from the initial post to give it more visibility.

I'm currently trying to combine the storage approach with the approach from @matklad, and I'm making good progress. Basically I have two types: Box and RawBox. RawBox holds the storage, and Box holds an allocator (if any) and the raw box. Box always acts like before: no allocator has to be injected. The raw box, however, depends on whether the storage needs an allocator or not. (I'm currently working on Box, but this is basically the same as Vec.) With this approach, using an allocator is optional, but users don't have to care: they can simply throw in an AllocatorStorage or an array and won't notice a difference. But if they do care, they can use the underlying raw box and don't have to store the allocator next to the box.

You can take a look here: https://github.com/TimDiekmann/storages.

I don't know if Storage::Item has any uses, but I don't think it's needed.

I will continue working on this later and tomorrow as I'm really excited about this topic :slight_smile:

I will continue working on this later and tomorrow as I'm really excited about this topic

I'm glad I could tickle your fancy :wink:

It's not clear to me what the benefit of not embedding the allocator in the RawBox is.

There's one crucial difference between the two: Vec can re-allocate, whereas Box has a storage that's created at the beginning and never touched.

I think it's easier to have the Storage contain the Allocator for re-allocation; this way it's completely delegated.

Another nasty surprise of reallocation is that the elements have to move. For Vec it's (relatively) easy, a simple realloc does the trick; for VecDeque or HashMap it's a tad more complicated: just because there are N elements doesn't mean you can just copy the N * size_of::<T>() bytes of the storage.

I've just finished a C++ implementation of VecDeque with an abstracted storage for work today, and in the end the simplest solution was to pass a closure to the Storage::resize method, taking two arguments (a &mut [MaybeUninit<T>] source and a &mut [MaybeUninit<T>] destination), and leave it up to the data structure to perform the custom logic involved in moving the elements. Of course, C++ has it worse than Rust (moves are not always bitwise), so in Rust it may be possible to create a new Storage and swap them; however, an in-place resize minimizes the amount of memory moved, so it's pretty nice too.
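Transposed to Rust, the closure-based resize could look something like this sketch (the storage type and method names are invented for illustration):

```rust
use core::mem::MaybeUninit;

// A heap-backed storage that delegates element relocation to the caller.
struct HeapStorage<T> {
    buf: Vec<MaybeUninit<T>>,
}

impl<T> HeapStorage<T> {
    fn with_capacity(n: usize) -> Self {
        let mut buf = Vec::with_capacity(n);
        buf.resize_with(n, MaybeUninit::uninit);
        HeapStorage { buf }
    }

    /// Grow to `new_size`; the data structure supplies `relocate`, which
    /// moves the live elements from the old buffer into the new one
    /// (e.g. un-wrapping a VecDeque's ring buffer while it's at it).
    fn resize(
        &mut self,
        new_size: usize,
        relocate: impl FnOnce(&mut [MaybeUninit<T>], &mut [MaybeUninit<T>]),
    ) {
        let mut new_buf = Vec::with_capacity(new_size);
        new_buf.resize_with(new_size, MaybeUninit::uninit);
        relocate(&mut self.buf, &mut new_buf);
        self.buf = new_buf;
    }
}
```

The data structure alone knows which slots are live and in what order they belong, which is exactly why the relocation logic cannot live in the storage.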


Quick note, in https://github.com/TimDiekmann/storages/blob/main/src/storage/array.rs I noticed that you implemented storage for [T; N], I think it should be [MaybeUninit<T>; N].

This should be fine for Copy types, but for non-Copy types you should definitely use MaybeUninit.
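For illustration, this is the standard stable trick for conjuring such an array without constructing any T (nightly has MaybeUninit::uninit_array for this), which is precisely what [T; N] cannot offer:

```rust
use core::mem::MaybeUninit;

// Create an uninitialized array without constructing a single T.
// SAFETY: an array of MaybeUninit is itself allowed to be uninitialized,
// so assume_init on the *outer* MaybeUninit is sound.
fn uninit_array<T, const N: usize>() -> [MaybeUninit<T>; N] {
    unsafe { MaybeUninit::<[MaybeUninit<T>; N]>::uninit().assume_init() }
}
```

With [T; N] as the storage, creating an empty container would already require N valid values up front, which is only cheap for Copy or Default types.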

When one allocator is used in many places, you would have to store a copy of it in every struct, which is cumbersome. When the allocator is provided as a reference whenever it is needed, you can store it in one place. Box is then just a dealloc-guard for RawBox. With Vec, it also provides the allocator to RawVec for reallocating.

So far the storages are the same; I'm trying to achieve an elegant trait hierarchy which is easy to use, safe, and flexible. For my current goal it's easier not to mess around with reallocation, but I will definitely go that way later.

Currently I'm experimenting with the method Box::from_storage, and it would be great if it didn't need to be unsafe. When adding a storage which may return uninitialized data (like an allocator-based storage), you need to call assume_init before you can use it. It would be nice if, when adding a storage with initialized data, the box could be used immediately. I don't know if it's worth it, though.

That's a nice and flexible solution! Maybe this could also be used for allocation? It could possibly also solve my problem above.

... let ... me ... try ... this ... out ... :upside_down_face:

As of the current main branch, this is true. If I can achieve the above, this would be fine.

@matthieum I like the direction of this. I tried to write a similar PoC but I got bogged down a little bit with making all the other things besides Vec work nicely with this. I need to revisit this. Let's start an RFC?

There's already the Allocator WG, so I don't think an RFC is really needed here - just do some exploration for the WG first without worrying about all the bureaucracy.

And crucially, @TimDiekmann is part of the WG and his repository is already way beyond my naive PoC: https://github.com/TimDiekmann/storages.

I still have some problems implementing the current methods of Box. While it has not been stabilized yet, there is an API for uninitialized values: Box::new_uninit (Tracking Issue). It doesn't really matter how the uninitialized API ends up looking, but I'm sure we want an API for uninitialized boxes. It's currently supposed to work like this:

let mut five = Box::<u32>::new_uninit();

let five = unsafe {
    // Deferred initialization:
    five.as_mut_ptr().write(5);

    five.assume_init()
};
assert_eq!(*five, 5);

The problem is the implementation of Deref. The first dereference occurs in the first line of the unsafe block, which returns &MaybeUninit<u32>. The second dereference occurs in the assertion, which returns &u32. I don't see a way of implementing this without changing the storage parameter. Specialization is not a solution here, because it would require specializing an associated type, which is not really supported (and maybe never will be, because of unsoundness).

Does anyone have an idea on how to solve this?

I must be overlooking something; can you clarify why this is problematic? My understanding is that Box::<u32>::new_uninit() returns a Box<MaybeUninit<u32>>, and five.assume_init() returns a Box<u32>. I can't see any inconsistency between the Deref implementations for these two types, and it's not clear to me why the storage parameter is problematic. But I'm sure I'm overlooking something here.

I think the underlying problem is the modelling here:

struct Box<T, S: Storage = AllocatorStorage<T, Global>>

One of the main uses of boxes is to take an unsized type and obtain a Sized handle (the Box), most notably with trait objects.

This is problematic here, because what does an AllocatorStorage<Trait, Global> even mean? You cannot know in advance the alignment and size necessary to store a trait object.

As a result, I think that for Box, unlike for Vec or VecDeque, we need an untyped storage: a storage that is just some bytes. Then one of two things happens:

  • The user provided a storage area that is suitable for the alignment and size of the dynamic type: awesome, it works.
  • The user provided a storage area that is unsuitable for either alignment or size (or both) of the dynamic type: construction fails. Do not pass Go. Do not collect $200.

In Vec and VecDeque the type is Sized, with the static type matching the dynamic type, which has 2 advantages:

  • It gives us the alignment necessary for the storage, allowing the allocation of MaybeUninit<T>.
  • It allows expressing the capacity in terms of number of elements, rather than number of bytes.

However, as you are experiencing here, I think a typed storage is not suitable for Box.
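A sketch of what the untyped, fallible variant could look like (everything here, names included, is hypothetical): the storage is raw bytes with a fixed size and alignment, and construction checks the dynamic type's Layout at runtime:

```rust
use core::alloc::Layout;
use core::mem::MaybeUninit;

// An untyped inline storage: just bytes. A real version would enforce ALIGN
// with #[repr(align(..))] helper types; this sketch only records it.
#[allow(dead_code)]
struct RawByteStorage<const SIZE: usize, const ALIGN: usize> {
    bytes: [MaybeUninit<u8>; SIZE],
}

// The runtime check: does the dynamic type's layout fit this storage area?
fn fits<const SIZE: usize, const ALIGN: usize>(layout: Layout) -> Result<(), ()> {
    if layout.size() <= SIZE && layout.align() <= ALIGN {
        Ok(())
    } else {
        Err(()) // construction fails: do not pass Go
    }
}
```

The Layout of the concrete value is known at the construction site (before the type is erased), so the check happens exactly once, when the value is moved into the storage.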
