try_reserve returning a non-growable Vec view

For fallible allocations, three alternatives have been considered:

  • try_reserve() -> Result<()> followed by a plain "YOLO" push(),
  • try_push() on the Vec itself, or
  • a FallibleVec twin that has all of its methods as try_* variants.

How about mixing it up, and making try_reserve return a non-growable equivalent of FallibleVec?

let mut reserved_space = vec.try_reserve(n)?;

reserved_space.try_push(x)?;
// or:
// this may panic if it runs out of reserved space,
// but it never reallocates and never OOMs!
reserved_space.push(x);

The idea is that the object returned by try_reserve would be a safe wrapper around the reserved, uninitialized capacity (conceptually a MaybeUninit<[T; reserved_size]>).
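Roughly, such a view could already be built on top of the stable Vec::try_reserve and Vec::spare_capacity_mut. The ReservedSpace type and reserve_view helper below are made-up names for illustration, not a proposed final API:

pub struct ReservedSpace<'a, T> {
    vec: &'a mut Vec<T>,
    remaining: usize,
}

// Hypothetical stand-in for the proposed `vec.try_reserve(n)?` returning a view.
pub fn reserve_view<T>(
    vec: &mut Vec<T>,
    n: usize,
) -> Result<ReservedSpace<'_, T>, std::collections::TryReserveError> {
    vec.try_reserve(n)?;
    Ok(ReservedSpace { vec, remaining: n })
}

impl<'a, T> ReservedSpace<'a, T> {
    // Writes into already-reserved spare capacity; never reallocates.
    // Gives the value back if the reservation is exhausted.
    pub fn try_push(&mut self, value: T) -> Result<(), T> {
        if self.remaining == 0 {
            return Err(value);
        }
        let len = self.vec.len();
        // try_reserve above guarantees at least `remaining` spare slots.
        self.vec.spare_capacity_mut()[0].write(value);
        // SAFETY: the element at index `len` was just initialized.
        unsafe { self.vec.set_len(len + 1) };
        self.remaining -= 1;
        Ok(())
    }

    // Panicking variant: running out of reserved space is a logic bug,
    // not an allocation failure, and this never reallocates or OOMs.
    pub fn push(&mut self, value: T) {
        if self.try_push(value).is_err() {
            panic!("exceeded reserved capacity");
        }
    }
}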

I expect that even when OOM handling is not a concern, this would generate slightly more optimized code, thanks to the explicitly guaranteed capacity and no need to handle reallocation. For example:

let mut v = Vec::with_capacity(x);
for item in something_of_len(x) {
    v.push(item);
}

That push() today adds extra code for the if !has_capacity { realloc() } check, and it seems that LLVM is unable to optimize it out.

OTOH:

let mut v = Vec::new();
let mut tmp = v.try_reserve(x)?;
for item in something_of_len(x) {
    tmp.push(item);
}

When push can guarantee fixed-size capacity without reallocations, it optimizes beautifully.

It's not relevant for the Linux kernel, because the plan is that Linux won't use the alloc crate at all.


I like the idea of using a different type here, and allowing for more optimization. However, if it still panics when it runs out of reserved space, that would cause problems for environments that shouldn't panic, such as the Linux kernel.


Does the kernel need to avoid all panics, or just those related to allocation failure and unsupported features like floating point?

Or in other words, is a kernel panic okay e.g. when indexing out of bounds? A panicking push is basically the same thing: assuming it's not caused by input from userspace/network/device/etc., and thus not an expected failure mode, there's not much to do in response. AFAIK there's no EKERNELBUG the way there is ENOMEM.

Or in other words, clearly the kernel does have panics and BUG and such; when are those appropriate to use?

Or more generally: Rust relies on panics in a lot of places where C just has undefined behavior, which is somewhere the kernel doesn't want to go in the first place. Is turning those into kernel panics an acceptable way to get a sound language, or is a new language not worth it unless it can catch all of them at compile time?

From what I read of Linus's response to the RFC for allowing Rust into the Linux kernel, he is basically against allocation panics. I think he is fine with out-of-bounds panics, since you can avoid them.

But that is just my understanding of it.

In kernel terms, that should probably be an "oops", not a panic. A panic kills the whole kernel; an oops just kills the process that's currently running in the kernel while leaving the rest of the kernel mostly functional (as long as that thread wasn't holding any locks or similar).


Ah, interesting distinction! Do you think Rust panics generally should be treated as "oops"es or would you still want any to be kernel panics? (Or maybe this is already being discussed on the mailing list somewhere?)

Sounds great for when you want to push multiple items and you know how many in advance. But this should exist in addition to Vec::try_push, rather than being a substitute for it. It's important that fallible allocations be ergonomic. If you only have one item to push, or if you want to push in a loop but you don't know the count in advance, try_reserve(1)?.push(item) is needlessly verbose.
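For reference, a thin try_push can already be layered on top of the stable Vec::try_reserve, roughly like this; the TryPush trait is a made-up name, just to illustrate the shape:

use std::collections::TryReserveError;

trait TryPush<T> {
    fn try_push(&mut self, value: T) -> Result<(), TryReserveError>;
}

impl<T> TryPush<T> for Vec<T> {
    fn try_push(&mut self, value: T) -> Result<(), TryReserveError> {
        // Make room for at least one more element (may grow amortized),
        // then push infallibly into the now-guaranteed capacity.
        self.try_reserve(1)?;
        self.push(value);
        Ok(())
    }
}

// Usage when the element count isn't known in advance:
//
//     let mut out = Vec::new();
//     for item in stream {
//         out.try_push(item)?;
//     }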


Linux is not planning to use the alloc crate and will instead implement kernel-specific containers from scratch.

I think providing a solid no-panic guarantee (as requested by Linus) is a separate problem. For example, there's no plan to remove Index support from Vec or slices, so no-panic enforcement must be done some other way; it can't be achieved merely by not implementing maybe-panicking interfaces.

Also keep in mind that Rust currently aborts on OOM, and custom OOM handlers are not allowed to unwind, so OOM handling in Rust is very destructive. Replacing the risk of an OOM abort with the risk of a mere panic is already a big improvement.
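For comparison, the already-stable Vec::try_reserve is what makes the non-aborting path possible at all: the failure comes back as a Result the caller can handle, instead of going through the aborting OOM handler. A rough example:

use std::collections::TryReserveError;

fn duplicate(data: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut copy = Vec::new();
    // Allocation failure surfaces here as an error value...
    copy.try_reserve(data.len())?;
    // ...and this cannot reallocate (or abort), because the
    // capacity is already guaranteed.
    copy.extend_from_slice(data);
    Ok(copy)
}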


And for now that's clearly the right way to go. Linux can experiment with API design, in a codebase that has zero API backwards compatibility requirements, while preserving the ability to compile old Linux versions with new compilers.

But I'd like to see alloc evolve to the point where Linux could hypothetically move back to it some day. Where, if the functionality had existed today, it would have been a no-brainer for Linux to use it instead of implementing their own.

(In this scenario, Linux might still want some custom container implementations optimized for specific needs, but those implementations would be written on top of alloc and would imitate the API design of standard containers.)

Even if Linux never actually moves back to alloc, Linux's requirements are close enough to those of other kernels, and really any project that wants to handle memory allocation failure (say, Hyper as used by curl), that it makes an excellent reference point.

In particular, those requirements may include not just the ability to handle allocation failure, but the ability to pass some kind of argument to the allocator, corresponding to the flags argument to kmalloc.
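As a very rough sketch of what that could look like with the (nightly-only, unstable) allocator_api: an allocator handle that carries the flags, so every fallible allocation made through a container uses them. GfpFlags and KernelAlloc are invented names here, and the real kernel bindings would surely differ.

#![feature(allocator_api)]

use std::alloc::{AllocError, Allocator, Layout};
use std::ptr::NonNull;

// Invented for illustration: kmalloc-style allocation flags.
#[derive(Clone, Copy)]
pub struct GfpFlags(pub u32);

// Invented for illustration: an allocator handle carrying those flags.
#[derive(Clone, Copy)]
pub struct KernelAlloc {
    pub flags: GfpFlags,
}

unsafe impl Allocator for KernelAlloc {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        // In the kernel this would call kmalloc(layout.size(), self.flags)
        // and map a NULL return to Err(AllocError); stubbed out here.
        let _ = layout;
        Err(AllocError)
    }

    unsafe fn deallocate(&self, _ptr: NonNull<u8>, _layout: Layout) {
        // Would call kfree(ptr).
    }
}

// With allocator-parameterized containers, the flags travel with the value:
//
//     let mut v = Vec::new_in(KernelAlloc { flags });
//     v.try_reserve(len)?;   // allocation failure is a Result, never an abort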

I'll be very interested to see how this all plays out in practice.

