Can and should the compiler prevent certain kinds of stack overflow?

Hi everyone,

I have just traced down a segfault where I am overflowing the stack with a const generic:

fn get_slice<const N: usize>() -> [usize; N] {
    let mut ns = [0; N];
    for i in 1..N {
        ns[i] = i;
    }
    ns
}

fn main() {
    let ns = get_slice::<520192>(); // no problems
    // let ns = get_slice::<522240>(); // `thread 'main' has overflowed its stack`
    // let ns = get_slice::<524288>(); // zsh: segmentation fault  cargo run --release

    let s = ns.into_iter().reduce(|a, b| a + b);
    println!("{:?}", s);
}

When running with cargo run --release, I get different error messages (see the comments next to the respective calls). When compiling manually, I just get the zsh: segmentation fault ./main message.

Let me start off with a few questions:

  • According to Stack Overflow, stack sizes are configurable. Couldn't the compiler figure out an appropriate stack size to prevent such issues?
  • If that breaks too many things, shouldn't there be knobs for it? Of course I could construct my own configured thread, but shouldn't such configuration be possible at compile time?
  • If for some reason configurable stack sizes are not feasible, shouldn't static analysis be able to reveal my mistake and issue a compile-time error?
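For reference, the "construct my own configured thread" workaround mentioned above can be done today with std::thread::Builder. A minimal sketch (the 32 MiB stack size and the smaller stand-in array are arbitrary choices for illustration):

```rust
use std::thread;

fn main() {
    // Run the stack-hungry work on a worker thread with an explicit 32 MiB
    // stack (the size here is an arbitrary illustration, not a recommendation).
    let handle = thread::Builder::new()
        .stack_size(32 * 1024 * 1024)
        .spawn(|| {
            let ns = [1u64; 100_000]; // ~800 KiB on the worker's stack
            ns.iter().sum::<u64>()
        })
        .expect("failed to spawn thread");
    let s = handle.join().expect("worker panicked");
    println!("{s}");
}
```

The main thread's stack size cannot be configured this way, which is presumably why the question asks for a compile-time knob.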

PS: I got directed here by GitHub issues; I'm still not quite sure whether I would call this a compiler feature request or a diagnostics bug, so feel free to tell me where best to file this.

“manual” compilation or running through cargo run shouldn’t make a difference. Note that the errors you get are run time errors, and that both are possible errors you can get on stack overflow in Rust.

As far as I know, static analysis to determine what stack size is necessary to avoid stack overflow is impossible in general. You cannot predict program behavior very well; things like the undecidability of the halting problem quickly bite you with the consequence that you can't actually know statically what parts of a program will be reached at run time ("you" being the compiler). Since code that is never reached never produces a stack overflow, a compiler cannot – for arbitrary programs – accurately predict the necessary stack sizes.

By the way, this is not a problem limited to const generics. In particular, recursive functions can easily lead to stack overflows. Furthermore, it's also not uncommon that a program's stack usage depends on its input and is practically unlimited for sufficiently large inputs. You can't give those kinds of programs enough stack size to outright prevent all stack overflows.
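For example, a sketch of a recursive function whose stack depth depends on its input, so no fixed stack size can be "enough" for all inputs:

```rust
// Stack usage grows linearly with `n`; a large enough `n` overflows any
// fixed-size stack. (Rust does not guarantee tail-call elimination, and
// this call is not in tail position anyway.)
fn depth(n: u64) -> u64 {
    if n == 0 { 0 } else { 1 + depth(n - 1) }
}

fn main() {
    assert_eq!(depth(10), 10); // fine for small inputs
    // depth(100_000_000) would overflow the stack at run time
}
```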

Afaik, you can configure the stack size of a program when starting it. Consequently, the compiler doesn't even know – at compile time – what the stack size (of the main thread) is going to be at run time. Even ignoring the previously described problem that it can (for some programs) be impossible to know the stack-size requirements, the fact that the actual available stack size is also unknown makes it seem impossible to ever, reasonably, give a compilation error.

Definitely a feature request; I see no bug here. Although, as described above and as far as I understand your proposals, the feature you're after might be completely impossible to achieve. Feel free to also link the GitHub issue that led you here, in case it gives additional context.


By the way, this is not a problem limited to const generics

Rephrased the title, as clearly this is more about slices and stack overflows than const generics. I agree with your analysis and had the halting problem in the back of my mind. However, I am still thinking: since I am able to configure thread stack sizes in code, I should be able to configure a stack size for my program, e.g. via a rustc flag or by giving an attribute to the main function. Otherwise anyone who runs the program needs to know to execute a ulimit -s 16384 before actually running it.

I think it's important to note that for most functions used in practice it is quite trivial for the compiler to analyze maximum stack usage. The whole async machinery could not work without calculating exact bounds for tasks' virtual stacks. I would love to have function attributes like #[bounded_stack] with compiler intrinsics allowing one to get the maximum stack usage of a given bounded function. But, unfortunately, similarly to #[no_panic], such a feature is far from trivial to implement.


The Linux kernel would benefit from such a mechanism as well. For C code, Linux uses -Wframe-larger-than=.


Can you give a link, or elaborate on why #[no_panic] is hard to implement? I would actually assume it would mean something like:

  • Check that there is no panic! call in the function body.
  • Check that all called functions are marked #[no_panic]

The second requirement to me seems eerily similar to #[derive(Copy)], which is possible iff all fields in a struct implement Copy. Similarly, if a function only calls functions that are marked #[no_panic], it is allowed to be marked #[no_panic], but we need the first requirement as well.

There are A LOT of places where Rust code may implicitly panic. Most of those places can be eliminated by optimization passes. But that means we cannot verify that a function indeed contains no panics without running those optimization passes.

Let's take a look at the following code:

const N: usize = 64;

pub fn sum(buf: &[u8; N]) -> u16 {
    let mut sum = 0u16;
    for i in 0..N {
        sum = sum.checked_add(buf[i] as u16).unwrap();
    }
    sum
}

In a debug build the function will contain at least 2 panics (the bounds check and the unwrap). But with optimizations the compiler is able to remove all panics. So it looks like we are in our full right to mark this function #[no_panic]. What if we change N to 128? Simple math tells us that overflow cannot happen until N is bigger than 256. So it should be fine, right? No. Unfortunately, the current compiler version is not smart enough to remove the unwrap panic.
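One way to make at least the bounds-check panic easy for the compiler to eliminate is to avoid indexing altogether. A sketch (N fixed at 64 here for concreteness):

```rust
pub fn sum(buf: &[u8; 64]) -> u16 {
    // No indexing, hence no bounds-check panic path at all. A debug-mode
    // overflow check on `+` still exists in the emitted code, even though
    // 64 * 255 = 16320 < u16::MAX means it can never actually fire.
    buf.iter().map(|&b| u16::from(b)).fold(0u16, |acc, x| acc + x)
}

fn main() {
    assert_eq!(sum(&[255u8; 64]), 16320);
}
```

This only illustrates the point above: whether a panic path exists depends on how the code is written and how smart the optimizer is, not on whether a panic can actually happen.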

#[no_panic] would be part of the public API, but it depends on how smart the compiler is. That means a regression in optimization passes could break public API! In the worst case, a compiler update could break builds. And I am not even touching on interaction with generics (though #[no_panic] could have been really useful for const generics) or on how the compilation target may influence the existence of panics (e.g. TryInto from usize to u32 cannot panic on 32-bit targets, but can on 64-bit ones). Rust simply lacks facilities to properly guide the compiler and prove various properties of code.
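The usize-to-u32 case can be illustrated like this (to_u32 is just a hypothetical helper name for the sketch):

```rust
fn to_u32(x: usize) -> u32 {
    // On 32-bit targets usize always fits into u32, so this unwrap can never
    // fire; on 64-bit targets it panics for x > u32::MAX as usize. Whether a
    // panic path exists thus depends on the compilation target.
    u32::try_from(x).unwrap()
}

fn main() {
    assert_eq!(to_u32(7), 7);
}
```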

Yes, we could introduce a very restrictive #[no_panic] MVP, which would require that code not panic even in debug builds. But obviously it would hardly be practical, and it would be really hard to extend reliably. At the very least it would have to rely on MIR-level optimization passes, not on LLVM ones.


There's also the for loop in your code example. A for loop desugars to code involving .into_iter() and .next() calls, both trait methods. How would no_panic interact with traits? Traits in generic code are highly nontrivial; this includes, for example, closures. Arguably one would want a somewhat capable "effects system" here, where a function like .for_each(...) on an iterator would be no_panic if and only if the iterator's next method is no_panic and the passed closure is no_panic, whatever that means in detail. And that's then the kind of feature that you rightfully call "far from trivial to implement".
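To make the point concrete, here is a rough, simplified version of what a for loop over a range desugars into (the real desugaring uses a match and is slightly different in shape):

```rust
fn main() {
    let mut total = 0;
    // Approximate desugaring of `for x in 0..3 { total += x; }`.
    // Both calls below are trait methods that a no_panic analysis
    // would have to reason about.
    let mut iter = IntoIterator::into_iter(0..3);
    while let Some(x) = Iterator::next(&mut iter) {
        total += x;
    }
    assert_eq!(total, 3); // 0 + 1 + 2
}
```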

As a minimal thing to support, perhaps concrete trait implementations could accept no_panic attributes that then work in non-generic code, but this quickly reaches its limits; (at least currently) the implementation of Iterator for Range is actually generic and involves an internal Step trait. So there would need to be some basic support for no_panic in generic functions with trait bounds, or the standard library would need to be refactored, or for loops over ranges just wouldn't work in no_panic either.


One huge issue is generics. Option::map doesn't panic unless the user closure panics. So just an attribute doesn't really work well -- it wants to be a whole system of being dependently no-panic, but that's a mess.
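The dependence is easy to observe: Option::map introduces no panic of its own, it only forwards whatever the closure does. A small demonstration:

```rust
use std::panic;

fn main() {
    // A non-panicking closure: no panic anywhere.
    assert_eq!(Some(21).map(|x| x * 2), Some(42));

    // A panicking closure: the panic propagates through `map`.
    panic::set_hook(Box::new(|_| {})); // silence the default panic message
    let caught = panic::catch_unwind(|| Some(21).map(|_| -> i32 { panic!("boom") }));
    assert!(caught.is_err());
}
```

So a plain attribute on map is either too strict (forbidding panicking closures) or unsound (hiding them), which is why something "dependently no-panic" would be needed.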

See the long C++ history of throw(...) exceptions specifications and noexcept.


The same issue exists for const fn, no? In the (far) future we could do:

const fn map_const<U, F: ~const FnOnce(T) -> U>(self, f: F) -> Option<U> {
    match self {
        Some(x) => Some(f(x)),
        None => None,

So it may be good to generalize the const fn infrastructure to handle more properties like no_panic, plus generic abilities. Then we can solve the problem with Option::map:

// ignore syntax for now
#[property(P subset of const + no_panic + no_oom + no_recursion + ...)]
P fn map<U, F: ~P FnOnce(T) -> U>(self, f: F) -> Option<U> {
    match self {
        Some(x) => Some(f(x)),
        None => None,
    }
}

What if we made a special attribute that tells the caller to ensure a sufficient stack size? An MVP would be to get this working for the thread spawn function, and also for main.

How will that work with function pointers, trait objects or extern "C"?

Very good question. Going back to the suggestion made by @HKalbasi, this could be something like a compiler-checked guarantee. If you call C functions, the compiler cannot check it, hence no guarantee.

I don't expect extern "C" to work with the attribute;

regarding vtables and other function-pointer-involving machinery, this info would just be another piece of desired metadata.

Such a metadata mechanism has to be implemented somehow, which is at best in progress...

It would be nice to get a warning when the compiler is statically aware that a frame exceeds a certain size, yes, likely a threshold chosen by the programmer. But the stack size allocated to a thread or process on startup is a deeply platform-dependent detail.

And by "platform" I don't just mean e.g. "Linux"; I mean "different instances of Linux can change this".


I'm pretty sure this is technically feasible, at least preliminarily. Has anyone considered that AdaCore already solved this (on a basic level) via the gnatstack utility? GNATstack calculates the stack usage of a program, from what I've read, though I don't actually have the tool to test it. Implementing something similar for Rust is, I think, possible. We won't be able to detect all stack-overflow conditions, but this topic is only about certain ones, so...


I think there's definitely some room for a lint here, though I don't think it should be an error by default. If the compiler notices that a certain monomorphization of a function definitely requires more than a certain amount of stack memory, it would fire, ideally identifying why ("you may want to change this huge array to a Vec").
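Applied to the original example, that suggestion would amount to moving the array to the heap. A sketch (get_vec is a hypothetical name for this illustration):

```rust
// Heap-allocated alternative to the original `get_slice`: a Vec's elements
// live on the heap, so even half a million of them don't touch the stack.
fn get_vec(n: usize) -> Vec<usize> {
    (0..n).collect()
}

fn main() {
    let ns = get_vec(524_288); // the size that segfaulted with an array
    let s: usize = ns.iter().sum();
    println!("{s}");
}
```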

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.