Make 'index out of bounds' a compile-time error



Started reading The Book yesterday and this puzzled me. From “Arrays have type [T; N]. […] The N is a compile-time constant, for the length of the array.” But the following code doesn’t produce a compile-time error. Can this be changed? FWIW, there are even some issues caused by the current behavior on this issue tracker.

% cat src/

fn main() {
  let xs = [1,2,3];
  println!("out of bounds: {}", xs[3]);
}

% cargo build
   Compiling array-bounds v0.1.0 (file:///home/nikita/rust/array-bounds)
% cargo run
     Running `target/debug/array-bounds`
thread '<main>' panicked at 'index out of bounds: the len is 3 but the index is 3', src/
Process didn't exit successfully: `target/debug/array-bounds` (exit code: 101)


Not easily.

Rust would need to be able to do integer theorem-proving magic to prove that the index is within bounds. With literals this of course isn’t hard, but when the index is a variable it becomes much more complex.


Let’s start with the simple cases. Then we’ll worry about the hard-to-crack, theorem-proving, AI-level problems later.


FYI, clippy will catch this error:

src/ 3:40 error: const index is out of bounds, #[deny(out_of_bounds_indexing)] on by default
src/     println!("out of bounds: {}", xs[3]);


Value Range Analysis plus Slice Length Analysis should be inside the compiler, not in a lint.


This seems fine for Rust 2.0, whenever that happens.

As an aside, I know there’s the rust-2-breakage-wishlist tag to track things that sound nice but can’t be implemented now because of the stability rules. There are only four issues tagged with it right now. Is this something that should be used more often (i.e. even for stuff that we may not end up choosing to do in 2.0)? It seems like a good source of potential ideas to review when the time for 2.0 comes, and it would be a shame to forget a potentially good idea because it didn’t meet the standard for the tag.


It’s not obvious to me that literal out-of-bounds accesses of static-length arrays are a common enough or serious enough problem to justify either a) prioritizing the implementation effort, or b) creating some sort of special pass in the compiler separate from typeck.

Though the proof rule is simple for the theorem that 3 !< 3, the infrastructure for performing that analysis at all is a big addition to the language.


Spotting as many problems as possible at compile-time is what Rust is meant to do :slight_smile: Statically reasoning about array and slice bounds, index bounds, value ranges, and so on, is a group of topics the Rust compiler/language is still weak at.

With Slice Length Analysis you can statically spot some bugs where slices are copied, diced and assigned, and you can remove the related run-time bound tests. With Value Range Analysis you can discard some more bound tests. With the usage of some (optional) strongly typed array indexes you can remove some run-time array bound tests. All those features work together, reinforcing each other.

As usual I don’t know if the implementation effort is worth it…


Having such checks which exist in the most simple cases but not in visually-similar complicated cases makes me uneasy. It should be obvious what the boundaries are for such a check. Whatever the boundary is, it should be chosen such that the boundary is obvious. “All const indexes” is an acceptable boundary, though a bit blurry. “All literal indexes” is a better, but very limited boundary. “All indexes” needs dependent types.

Often the thing that is logical from the implementation side is not logical from the usage side. For example, it seems pretty easy to make it so that we can catch both:

let array = [1,2,3,4,5];
let x = 7;
array[x]; // out of bounds, visible after constant propagation

However, there is a very blurry boundary on where this check will and won’t work from the user’s side. Does it work for just simple let-and-use? Can it work for more complicated cases? Can I rely on this? (no).

This goes against the grain of other Rust safety features where their realm of applicability is pretty clear and you know you can rely on it.

TLDR: Don’t define safety checks based on what is easy to implement. In this case, make it work for all literals or all consts but not other mashups.


You need dependent types, so probably not?

Dependent types are a really nice thing to have, but I don’t see them coming to Rust.


You don’t rely on rustc for catching all your bugs. You can’t, unless you formally prove the code correct (and even that isn’t enough to rule out all run-time bugs).

So I don’t understand what the problem is with catching some bugs at compile time instead of having run-time bound errors. If the compiler catches some of your array bugs (and optimizes away some bound tests at compile time) then it’s nice; otherwise they will be caught at run time. What’s bad about this? The alternative is worse: catching all of them at run time.

In the list of features I’ve listed for Rust there isn’t dependent typing.

(Note: I have strong opinions on this whole topic).


But I do rely on rustc for catching all of certain kinds of bugs. Well-designed type systems are complete in the guarantees they provide. It’s never the case that I can sometimes pass an invalid type just because the signature is too complicated; that would be a soundness bug.

I think checking all indexes by constants as @Manishearth suggested would be a behavior that I could understand what to expect from, but I also don’t think it’s worth the effort. My experience has been that I get indexing errors when either the index or the array is of a dynamic length, not when both of them are known. Perhaps after const parameterization it would be more useful, because then I might be writing functions like this:

fn second<T: Copy; n: usize>(arr: &[T; n]) -> T {
    arr[1]
}

But that’s a long way off.


This is right, but if you use a static code analysis tool (like PVS-Studio) you eventually learn that spotting bugs statically in your code is essentially a statistical thing. You start reasoning about “average number of bugs per 1000 lines of code”, about reducing bug incidence, about the new bugs spotted in your codebase when you just update the version of PVS-Studio you’re using.

What I am talking about is not exactly a soundness problem, because what isn’t caught at compile time is later caught at run time. So what changes is just when you see the problem (and for me, the sooner the better). It’s similar to the compile-time removal of bound tests in a piece of Rust code. Currently you can’t be certain the compiler will remove a certain bound test; it’s an optimization that’s left to LLVM, which follows some rules but can’t be fully relied on, I think. So it’s a “best efforts” thing. Once the complexity of the compiler goes past a certain point, the programmer loses the ability to understand why a certain part of code is optimized in a certain way. I am willing to accept the same uncertainty, if the gain is having some of the bugs spotted at compile time.

This important feature is of course a part of this discussion. Slice Length Range Analysis becomes rather less useful if you don’t even carry around the length of your fixed-length arrays. Type-level integers are probably coming to Rust eventually; they are too handy not to.


Unless you really need random access indexing, using some kind of iterator over mutable elements solves the problem nicely for >80% of cases.

In those cases indexing is an antipattern (and will be slower because the compiler cannot omit the bounds check. Good for us!). So if we were to invest into catching any but the simplest cases, we’d be bolstering the wrong abstraction. Sure, there are cases where you really need random access, but they are actually less common than people think.


Can you give me a link that shows where that 80% comes from?

Think about having to translate C/D code to Rust.

I think those cases are more common than you think.


I really can’t understand the resistance to this good idea. I really am with nkaretnikov and leonardo.


rustc already warns on divide by zero, so I think giving a warning for the simple case of an ‘index out of bounds’ error is also a good idea.

fn main(){
    let zero = 0;
    10/zero; // no warning
    10/0;    // warning: attempted to divide by zero, #[warn(const_err)] on by default
}


So, that’s a lint. And Rustc has a policy of no new lints, since they can be included in clippy.

This lint is already included in clippy. I’d be happy to upstream it to rustc, but I suspect that needs an rfc to happen.


In essence:

  • It can’t be an error without being a breaking change, and we’re wary about those.
  • It only catches a tiny fraction of indexing issues, of a kind that is pretty uncommon in Rust code in general.
  • It already exists as a lint in clippy, so it’s no big deal.


Unfortunately, Rust is not magical.

Detecting these errors at compile time carries a number of significant trade-offs. First of all, the majority of out-of-bounds cases cannot even be detected, since the index is either arbitrary or only weakly constrained. That means there is no usable bound, and thus no information from which to reason about the value; every potentially OOB access has to be assumed possible (even when it is actually unreachable).

So, in the majority of cases, this would force the programmer to do a bounds check anyway (through an if or match), with no better performance than the implicit one.

“Free” manual checks are already possible through the widely used get() method.

For statically detecting out-of-bounds conditions, you’d have to introduce a number of otherwise ineffective constructs into the type system:

  1. Non-local typestate analysis, meaning that you cannot analyze at the function level: you’d have to type-check the program as a whole, which has exponential complexity. For any non-trivial program the compile time would skyrocket (my guess is about 50-70x). Typestates have proven ineffective over time, due to their many limitations, especially since typestate is an extremely “pessimistic” form of analysis. Typestate bounds checking is very crude compared to, for example, LLVM’s bounds-check elision, which is sophisticated and extensive, so chances are you’d get no advantage from typestate bounds checking over LLVM’s optimizations.

  2. Structural (non-lexical) subtyping. This is a very complex construct to introduce into a type system with efficient type inference; type inference would be practically impossible with this kind of structural subtyping.

  3. Non-trivial dependent typing. You’d have to reason about the state of the program based on the control path, information about previous program states, non-local type information, and so on. This construct severely increases the complexity and compile time of the language.

These are a lot of things to trade for a mostly useless mechanism, which would force the programmer to do the bounds check manually most of the time anyway.

A hundred times easier, yet just as effective, would be to remove the panicking Index impls, but that seems like a bad idea too, since OOB is normally unrecoverable, and using get() everywhere would make the language very verbose.

So while it is possible to create a mechanism which detects OOB, it cannot be perfect (it will have false positives), since the problem is undecidable. The best and simplest solution is range constraints for integers, but even that is of limited use, since most integers are arbitrary.

A lint for const exprs is certainly possible, but a generalized solution is not. I’m sorry, but Rust cannot solve the halting problem (I wish it could :disappointed_relieved:).