There should be an easy way for users to control where panics may or may not happen. Being a Systems Language™, some of Rust’s use cases simply cannot afford to panic at all.

What if it was simply

```rust
#[no_panic]
fn main() { ... }
```

C++11 has noexcept, meaning no exception will bubble up from the call. Since panics cannot be caught, such a guarantee would have to apply recursively in Rust: the function and everything it calls.

There are performance considerations as well. Scott Meyers, Effective Modern C++:

The difference between unwinding the call stack and possibly unwinding it has a surprisingly large impact on code generation. In a noexcept function, optimizers need not keep the runtime stack in an unwindable state if an exception would propagate out of the function, nor must they ensure that objects in a noexcept function are destroyed in the inverse order of construction should an exception leave the function.

unsafe should be allowed; a recursive restriction would not be useful without it. (The verdict for purity was “kill it with fire”.) As always, unsafe means the compiler cannot prove the code’s correctness: it should be looked at early and often, and you shouldn’t use it if you can’t tolerate a crash.

We might end up with two sets of functions for everything, but in this case, I don’t see that as a bad thing.

#[no_panic] again

Well… one could change every function, except for unwrap/expect and similar, to use Result, and then you would not need two sets of functions. Setting no_panic would then require one to match on all Results instead of unwrapping them. Even without no_panic this would be sensible, and it is already done in large parts of std.

The main problem I see is convenience. You would not be able to use the index operators or many of the convenient memory-allocating functions. But then again, that sounds like exactly what you want to enforce.
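To illustrate the trade-off, today’s std already offers a checked counterpart to indexing: `v[i]` panics on a bad index, while `slice::get` returns an `Option` the caller must match on. A minimal sketch (the helper name is illustrative):

```rust
// Illustrative helper: a non-panicking lookup built on `slice::get`,
// which returns None instead of panicking on an out-of-bounds index.
fn lookup(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = [1, 2, 3];
    // v[10] would panic; the checked variant forces explicit handling.
    match lookup(&v, 10) {
        Some(x) => println!("got {}", x),
        None => println!("index 10 out of bounds, handled without panicking"),
    }
}
```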

How do you figure that? You can call no_panic functions from normal functions, and everything you need to use from both should be returning Results.


Why introduce a separate attribute for this? I think it would be more useful as a standard warning lint (assuming that is possible.)

To forbid panics entirely: you would simply use the #![deny(panics)] attribute at the crate-level, or attach it to fn main(). Perhaps more importantly: with this approach you could also enforce the lint as a warning.

More often than not: I don’t mind if my code panics, but I would like to be warned when I introduce such a scenario; then I can reason about the consequences of panicking in that particular task.

So in my experience it would be nice to turn on #![warn(panics)] at the crate-level and then #[allow(panics)] after I’ve vetted a particular function.


I don’t think that can work… it would only forbid explicit panics in your code (calling unwrap or the like wouldn’t get caught). And if you deny the lint for libraries, it would warn/error on every call to panic! inside them, regardless of whether your code can actually reach it.


Perhaps it’s just best to disable all the panics for now (turn them into abort() calls)?

They are less than useless in their current state, and their poisoning interaction with mutex guards and Arcs is definitely unexpected for C++ developers.


Panic is very important for certain use cases, including actually-deployed rust code today.


I think #[no_panic] with C++ noexcept semantics is a good idea.

Note that this has nothing to do with a potential #![deny(panics)] lint. Pretty much all rust code can potentially panic (due to bounds checking, and possibly overflow checking in the future). A #[no_panic] function should be allowed to call other functions that might potentially panic, without any compiler warnings. The assumption is that the code ensures the panic can’t happen at runtime (for example, I often know my array indices are not out-of-bounds even if the compiler doesn’t).
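The bounds-checking case above can be sketched with ordinary std code (the function is illustrative): every index is provably in range, yet each `v[i]` still carries a potential panic as far as the compiler is concerned, so a hypothetical `#[no_panic]` must be allowed to trust the programmer here.

```rust
// Sketch: every index below is provably < v.len(), but the compiler
// still emits bounds checks, so each access is a potential panic site.
fn sum_adjacent_pairs(v: &[i32]) -> i32 {
    let mut total = 0;
    let n = v.len() / 2;
    for i in 0..n {
        // 2*i + 1 <= 2*(n-1) + 1 < v.len(), so these never panic at runtime.
        total += v[2 * i] + v[2 * i + 1];
    }
    total
}

fn main() {
    // The last element of an odd-length slice is simply ignored.
    println!("{}", sum_adjacent_pairs(&[1, 2, 3, 4, 5]));
}
```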

Of course, the question then becomes: what are semantics if a panic does happen in a #[no_panic] function? The C++ solution here is:

If a search for a matching exception handler leaves a function marked noexcept or noexcept(true), std::terminate is called immediately. It is implementation-defined whether any stack unwinding is done.

Here is my suggestion for the #[no_panic] semantics in rust:

  • A caller of a #[no_panic] function is guaranteed that we won’t attempt to unwind out of the function.
  • If a panic does happen within a #[no_panic] function, the process is aborted.
  • There is no guarantee how many Drop impls will run while unwinding prior to the process being aborted (but if a Drop impl does run, all other Drop impls that should have run prior are guaranteed to do so, in the correct order)

Apart from documenting that a function isn’t supposed to panic, a #[no_panic] attribute with these semantics should allow the compiler to optimize away the overhead of unwinding – the stack doesn’t have to be kept in an unwindable state, and the compiler can avoid code bloat due to unwinding tables and landing pads. (#[no_panic] functions don’t need those if “abort the process” is the default when unwinding information is missing)


Isn’t that nearly the same as putting a destructor that aborts the process on the stack? We could simply have the compiler detect that pattern…
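That destructor pattern can be written in today’s Rust; a minimal sketch (names are illustrative, and this only emulates the abort-on-panic semantics, not the codegen benefits):

```rust
use std::process;

// A guard whose Drop aborts the process if it runs during unwinding,
// so no panic can propagate past the scope holding the guard.
struct AbortOnUnwind;

impl Drop for AbortOnUnwind {
    fn drop(&mut self) {
        if std::thread::panicking() {
            // A panic is unwinding through us: kill the process instead.
            process::abort();
        }
    }
}

fn must_not_unwind(x: i32) -> i32 {
    let _guard = AbortOnUnwind;
    // Any panic between here and the end of the function becomes an abort.
    x + 1
} // _guard drops normally on the success path

fn main() {
    println!("{}", must_not_unwind(41));
}
```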


There would be no difference whatsoever between normal Rust code and #[no_panic] then.


There’s a major difference between #[no_panic] and using a destructor to abort the process: because #[no_panic] may stop unwinding before it even reaches the #[no_panic] function, it effectively applies recursively. Marking a function as #[no_panic] allows the compiler to avoid generating unwinding code not only for that one function, but also for every function in the call graph that is only reachable through #[no_panic] functions. Also, since it’s part of the function signature, it can be used when optimizing across crate boundaries. (We can’t implicitly infer #[no_panic] across crates if we want a stable ABI.)

Yes, the main usage would be as an optimization hint to avoid code bloat. But it’s also a contract with the caller: no matter what parameters are passed, if this code panics, it’s a bug in this piece of code (not in the caller). This might be relevant information for future tools that generate unit tests.


This sounds like unsafe{} with a twist: it allows optimizations that might completely break safety if a panic actually occurs. I prefer the original suggestion of making this recursive. If you want an escape hatch to call a potentially panicking function that you know won’t panic because of checks you made, then you have to use unsafe. On the other hand, functions returning Result instead of panicking already do those checks for you, so you only need to call the Result variant instead of the panicking one.
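Today’s std already embodies this split: the non-panicking unchecked variant requires `unsafe` and shifts the proof obligation onto the caller, while the checked path handles the failure as a value. A minimal sketch (function name is illustrative):

```rust
// Sketch: taking the unsafe, non-panicking path only after establishing
// the invariant that the bounds check would otherwise verify.
fn first_or_zero(v: &[i32]) -> i32 {
    if v.is_empty() {
        0
    } else {
        // SAFETY: v is non-empty, so index 0 is in bounds.
        unsafe { *v.get_unchecked(0) }
    }
}

fn main() {
    println!("{}", first_or_zero(&[7, 8, 9]));
}
```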


Why would it break safety? Aborting the process isn’t unsafe: it can happen at pretty much any time in safe Rust code due to the Linux OOM killer, the user manually killing the process, the system losing power, etc…


So basically you want panic!() to be abort() inside functions that are marked with no_panic?


And all functions that are called from a #[no_panic] function.


We don’t need to infer #[no_panic] if it’s part of the ABI. C++ noexcept is included in the signature and will result in a link-time error if your header doesn’t match the object file.

But I don’t want just an optimization. I want a compile-time check wherever it’s feasible. That’s why I use Rust in the first place.


ABI-compatible cross-crate optimizations are a reason to have (some kind of) #[no_panic]; I was arguing against pcwalton’s “We could simply have the compiler detect that pattern…”.

I don’t think it’s feasible to statically guarantee that a function does not panic, because pretty much everything in Rust can panic. If the checked-overflow RFC is accepted, any integer arithmetic may panic. Division by zero already panics, so would division be completely forbidden in any #[no_panic] function? And considering that any function call may panic due to stack overflow, do we forbid all function calls?
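For the arithmetic cases, std already offers non-panicking alternatives that report the failure as a value instead; a minimal sketch:

```rust
fn main() {
    let a: i32 = 10;
    // `a / 0` panics; checked_div turns a zero divisor into None.
    assert_eq!(a.checked_div(2), Some(5));
    assert_eq!(a.checked_div(0), None);
    // Overflow likewise: checked_add reports it as None instead of
    // panicking (as `i32::MAX + 1` does under debug assertions).
    assert_eq!(i32::MAX.checked_add(1), None);
    println!("all checked operations handled without panicking");
}
```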


I’m somewhat uncomfortable with changing the semantics of the language based on what an undecidable control flow analysis (whether function A is always called by function B) decided to do. I’d be more comfortable with explicitly annotating functions in which failure becomes an abort. If mass-annotating functions in this way is a burden, we can solve that problem in sound ways (e.g. mass attributes that apply to an entire module).


Perhaps it would make sense to have two different attributes for this? One (maybe called #[no_panic_check]) would do as @Jurily suggests and recurse into a function, refusing to compile if any of the called functions can panic. The other (maybe called #[no_panic_guarantee]) would act more like C++11’s noexcept and act as a promise to the compiler that you have thought about your function’s edge cases, divisions by zero, etc and are convinced that it will not panic (barring a stack overflow or other pathological case). #[no_panic_check] would assume that functions marked #[no_panic_guarantee] will not panic when doing its compile-time check, and inside a #[no_panic_guarantee] panics would act like aborts.
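A sketch of how the two hypothetical attributes might interact (neither attribute exists; this is illustrative pseudocode, not compilable Rust):

```rust
#[no_panic_guarantee]    // programmer's promise: vetted by hand;
fn vetted(x: u32) -> u32 {
    x / 7                // divisor is a nonzero constant, cannot panic
}                        // a panic here anyway would act as an abort

#[no_panic_check]        // compiler-verified: may only call functions
fn verified(x: u32) -> u32 {
    vetted(x)            // OK: trusted via #[no_panic_guarantee]
    // some_vec[i]       // would be rejected: indexing can panic
}
```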

I believe this would make it feasible to have some degree of compile-time panic-checking, while also addressing many of @dgrunwald’s concerns. And I believe #[no_panic_check] would allow the same optimizations as C++11’s noexcept, which is a huge bonus as well.


I think it would indeed be useful to have a way to optionally use, and optionally enforce the use of, checked arithmetic. I am hoping someone will figure out convenient ways to bake the numeric ranges of numbers into their types and do all of the checked arithmetic mostly implicitly at compile time. Ada has language-level support for this. I suspect Rust’s type system isn’t strong enough to do it yet; it seems like you would need generics/templates with float/integer arguments…

This would be particularly useful in autopilot code where a code crash could mean a plane crash. (Though of course you can have other safety mechanisms to help deal with that event)