Bring segmented stacks back to embedded targets

Rust moved away from segmented stacks around 2015 because of the hot split problem.[1] On 64-bit platforms with virtual memory, where virtual pages can be mapped to physical frames on demand, segmented stacks offer less of an advantage.

However, on embedded platforms, where tasks read from and write to physical memory directly, segmented stacks show significant benefits.

First, we can grow the stack on demand and free the unused portion when functions return, rather than allocating it statically. This improves memory efficiency on microcontrollers, whose RAM is typically on the order of 100 KiB.

Second, segmented stacks give us a mechanism to prevent the stack from overflowing into other memory segments. With segmented stacks, each function prologue checks the remaining free space and calls __morestack when the new frame is about to overflow.[2] The runtime, which implements __morestack, also gets a chance to gracefully kill a task if the whole system is running out of memory.
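For concreteness, here is a rough sketch of what that prologue check amounts to. The helper names are illustrative stand-ins; in a real implementation this is a couple of instructions emitted by the compiler, not a library function.

```rust
// Conceptual sketch of a segmented-stack prologue (made-up helper names).

/// Stand-in for reading the stack-pointer register.
fn stack_pointer() -> usize { unimplemented!() }
/// Stand-in for the per-task stack limit the runtime maintains.
fn stack_limit() -> usize { unimplemented!() }
/// Stand-in for the runtime routine that allocates and links a new stack
/// segment, or gracefully terminates the task if memory is exhausted.
fn __morestack(frame_size: usize) { let _ = frame_size; unimplemented!() }

fn prologue_check(frame_size: usize) {
    // Not enough room left in the current segment for this frame?
    if stack_pointer().wrapping_sub(frame_size) < stack_limit() {
        // Ask the runtime for a fresh segment (or be killed gracefully).
        __morestack(frame_size);
    }
    // ...the real function body would run here with `frame_size` bytes available...
}
```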

Being able to prevent stack overflow is crucial because the safety provided by Rust's type system implicitly assumes that the underlying memory behaves as expected, and an overflowing stack easily breaks that assumption.

Embedded systems often lack hardware memory protection, and they are exactly where the safety provided by the Rust language truly shines. Bringing segmented stacks back to embedded targets would tie up this loose end.

A discussion about bringing back segmented stacks happened four years ago.[3] Has anything changed since then? Does my argument make sense? How hard would it be to implement?


This discussion seems related:

It would be cool if Rust were a low-level enough language to let the stack allocation strategy be implemented in a library.

Preface

I know nothing about how the compiler is written internally, so I have no idea if this could work or not. If it can't work, I'd appreciate it if someone would explain in detail why, just so I can learn more. Thanks!

The idea

We already have std::alloc, which lets us specify the global allocator we're going to use; is it not possible to do something similar for the stack allocation strategy? The default could then be the current behavior, but systems that need something different would only have to set the right attribute and link in the right crate. Given that there seem to be several possible strategies one might want when implementing stacks, this could be a good option, as developers for a particular platform would then be able to optimize the stack handling routines for their platform. It might even give us other, more interesting options, like one stack strategy for debug builds and another for release, etc. And it means the compiler team wouldn't have to maintain multiple stack strategies, since those would be externalized to other crates.
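For reference, the existing heap-side hook that the post alludes to looks like this today; the suggestion is, roughly, that a stack allocation strategy could be made pluggable through a similar attribute (no such stack-side attribute exists at present).

```rust
// Today's global-allocator hook: a library type plus an attribute select the
// heap allocation strategy for the whole program. The idea above is an
// analogous hook for the stack strategy.
use std::alloc::{GlobalAlloc, Layout, System};

struct LoggingAlloc;

unsafe impl GlobalAlloc for LoggingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Delegate to the system allocator; a real implementation could do
        // anything here (pools, arenas, instrumentation, ...).
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: LoggingAlloc = LoggingAlloc;
```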

stacker partially accomplishes this: it lets you dynamically allocate a new stack, but it requires manually calling stacker::maybe_grow with a closure.
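A minimal sketch of how that looks in practice (the Node type and the 64 KiB / 1 MiB figures are just placeholders):

```rust
// Recursing over a user-supplied tree: if fewer than `red_zone` bytes of
// stack remain, stacker switches to a freshly allocated segment of
// `stack_size` bytes before running the closure.
struct Node {
    children: Vec<Node>,
}

fn count(node: &Node) -> usize {
    stacker::maybe_grow(
        64 * 1024,   // red_zone: grow when fewer than 64 KiB remain
        1024 * 1024, // stack_size: each new segment is 1 MiB
        || 1 + node.children.iter().map(count).sum::<usize>(),
    )
}
```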


stacker is indeed useful. Even rustc uses it to alleviate the possibility of stack overflow.[1]

But one big problem with stacker::maybe_grow is that it requires us to set the red_zone parameter manually; rustc, for instance, uses a heuristic value.[1] We may still run into a stack overflow if the function (closure) we invoke inadvertently uses a stack frame larger than red_zone.
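To illustrate the failure mode with made-up numbers: if the closure's own frame is larger than the declared red zone, the check can pass and the frame can still run off the end of the stack.

```rust
fn risky() -> u8 {
    // Declared red zone: 4 KiB. The closure's frame below is ~64 KiB, so even
    // when maybe_grow decides the remaining stack is "enough", it has only
    // verified 4 KiB of headroom and the large frame may still overflow.
    stacker::maybe_grow(4 * 1024, 256 * 1024, || {
        let big_frame = [0u8; 64 * 1024];
        big_frame[big_frame.len() - 1]
    })
}
```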

Ideally, the compiler would calculate and pass in the red_zone argument itself. A related question was asked on Stack Overflow.[2] However, it seems there is currently no way to achieve that.
