Some preliminary notes on array bound tests


This post contains some preliminary notes on topics that I’ll discuss in more detail in the future.

This is a small but reasonably fast Sudoku solver written in C (modified from an example found online):

#include <stdio.h>

void show(int *x) {
    int i, j;
    for (i = 0; i < 9; i++) {
        if (!(i % 3)) putchar('\n');
        for (j = 0; j < 9; j++)
            printf(j % 3 ? "%2d" : "%3d", *x++);
        putchar('\n');
    }
}

int trycell(int *x, int pos) {
    int row = pos / 9;
    int col = pos % 9;
    int i, j, used = 0;

    if (pos == 81) return 1;
    if (x[pos]) return trycell(x, pos + 1);

    for (i = 0; i < 9; i++)
        used |= 1 << (x[i * 9 + col] - 1);

    for (j = 0; j < 9; j++)
        used |= 1 << (x[row * 9 + j] - 1);

    row = row / 3 * 3;
    col = col / 3 * 3;
    for (i = row; i < row + 3; i++)
        for (j = col; j < col + 3; j++)
            used |= 1 << (x[i * 9 + j] - 1);

    for (x[pos] = 1; x[pos] <= 9; x[pos]++, used >>= 1)
        if (!(used & 1) && trycell(x, pos + 1)) return 1;

    x[pos] = 0;
    return 0;
}

void solve(const char *s) {
    int i, x[81];
    for (i = 0; i < 81; i++)
        x[i] = s[i] >= '1' && s[i] <= '9' ? s[i] - '0' : 0;

    if (trycell(x, 0))
        show(x);
    else
        puts("no solution");
}

int main(void) {
    int i;
    for (i = 0; i < 1000; i++)
        solve(  "53..7...."
                "6..195..."
                ".98....6."
                "8...6...3"
                "4..8.3..1"
                "7...2...6"
                ".6....28."
                "...419..5"
                "....8..79" );

    return 0;
}

A Rust port:

const N: usize = 9;
const NN: usize = N * N;
type Table = [u32; NN];

fn show_table(tab: &Table) {
    let mut pos = 0;
    for i in 0 .. N {
        if i % 3 == 0 {
            println!("");
        }
        for j in 0 .. N {
            if j % 3 == 0 {
                print!("{:3}", tab[pos]);
            } else {
                print!("{:2}", tab[pos]);
            }
            pos += 1;
        }
        println!("");
    }
}

fn try_cell(tab: &mut Table, pos: usize) -> bool {
    unsafe {
        if pos == NN {
            return true;
        }
        if *tab.get_unchecked(pos) != 0 {
            return try_cell(tab, pos + 1);
        }

        let row = pos / N;
        let col = pos % N;

        let mut used: u32 = 0;
        for i in 0 .. N {
            used |= 1u32.wrapping_shl(tab.get_unchecked(i * N + col).wrapping_sub(1));
        }

        for j in 0 .. N {
            used |= 1u32.wrapping_shl(tab.get_unchecked(row * N + j).wrapping_sub(1));
        }

        let round_row = row / 3 * 3;
        let round_col = col / 3 * 3;

        for i in round_row .. round_row + 3 {
            for j in round_col .. round_col + 3 {
                used |= 1u32.wrapping_shl(tab.get_unchecked(i * N + j).wrapping_sub(1));
            }
        }

        *tab.get_unchecked_mut(pos) = 1;
        while *tab.get_unchecked(pos) <= N as u32 {
            if used & 1 == 0 && try_cell(tab, pos + 1) {
                return true;
            }
            *tab.get_unchecked_mut(pos) += 1;
            used >>= 1;
        }

        *tab.get_unchecked_mut(pos) = 0;
        false
    }
}

fn solve(s: &[u8]) {
    if s.len() != NN {
        println!("Wrong input length.");
        return;
    }

    let mut tab: Table = [Default::default(); NN];
    for i in 0 .. NN {
        tab[i] = (s[i] as char).to_digit(10).unwrap_or(0);
    }

    if try_cell(&mut tab, 0) {
        show_table(&tab);
    } else {
        println!("No solution.");
    }
}

fn main() {
    let puzzle = b"53..7....\
                   6..195...\
                   .98....6.\
                   8...6...3\
                   4..8.3..1\
                   7...2...6\
                   .6....28.\
                   ...419..5\
                   ....8..79";
    for _ in 0 .. 1_000 {
        solve(puzzle);
    }
}

I compiled the code with gcc 4.9.1 and rustc 1.10.0-nightly (2016-04-20), using:

gcc -Ofast -flto -Wall sudoku1.c -o sudoku1
rustc -C opt-level=3 -C prefer-dynamic -C target-cpu=native sudoku1.rs

The C version (with 1000 loops) runs in about 0.24 seconds, while the Rust version takes about 0.26 seconds, so their run times are close enough.

But if I replace the get_unchecked/get_unchecked_mut calls with normal safe array accesses, the run time of the Rust code grows to about 0.31 seconds. For some kinds of usage this is too large a difference in run time.
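As a side note, part of that gap can often be closed in safe code by using iterators instead of indexing. Here is a minimal sketch (not the benchmarked code; `row_mask` is an invented helper name) of the row scan of the `used` mask written that way: slicing performs one bound test, and the iteration itself needs no per-element tests.

```rust
// Sketch: the row scan of the `used` mask with a safe iterator.
// `row_mask` is an invented helper, not part of the original solver.
fn row_mask(tab: &[u32; 81], row: usize) -> u32 {
    // One bound check for the slice; none inside the loop.
    tab[row * 9 .. row * 9 + 9]
        .iter()
        .fold(0u32, |used, &v| used | 1u32.wrapping_shl(v.wrapping_sub(1)))
}

fn main() {
    let mut tab = [0u32; 81];
    tab[9] = 3;  // row 1, column 0 holds a 3
    tab[17] = 5; // row 1, column 8 holds a 5
    let mask = row_mask(&tab, 1);
    assert!(mask & (1 << 2) != 0); // bit for digit 3 is set
    assert!(mask & (1 << 4) != 0); // bit for digit 5 is set
}
```

This doesn’t help the scattered column and box accesses, though, which is exactly where the bound tests remain.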

You could probably write proofs that the array accesses in that try_cell() function are always in bounds (and languages like Dafny or Whiley may be able to prove it once you have written some contracts and invariants). But I think there are features, simpler to implement and use and faster to compile, that can remove all (or some of) the array bound tests in try_cell(). I’d like some such features (tied together) in Rust, partially derived from the Ada language and other sources, to help improve the safety and/or efficiency of Rust code. I think it’s quite important for Rust’s design goals to allow writing programs that are both safe and very fast.

I’ve seen that the compiler seems able to remove the array bound test for the “pos” index in the try_cell() function, but not for the other three array accesses inside try_cell(). Those remaining three could be removed with Value Range Analysis.

Besides Value Range Analysis (and Slice Length Range Analysis) and a few other features, a couple of array/slice methods could be useful, e.g. get_verified() and get_verified_mut(), with signatures like (hypothetical):

fn get_verified(&self, index: usize) -> &T;
fn get_verified_mut(&mut self, index: usize) -> &mut T;

They access the array without run-time bound tests, but they give a compilation error if the compiler isn’t able to statically prove that the index will never go out of bounds at run time. So they are always safe and always fast, if they compile.
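Something close to this can sometimes be approximated today by giving the optimizer a fact up front. This is only a sketch (`column_sum` is an invented example function), and unlike a real get_verified(), whether the checks are actually elided depends on the optimizer rather than being guaranteed:

```rust
// Sketch: an up-front assert gives the compiler a fact from which it can,
// in principle, prove the later indexes in bounds and drop their tests.
// `column_sum` is an invented example, not from the post.
fn column_sum(tab: &[u32], col: usize) -> u32 {
    assert!(tab.len() == 81 && col < 9);
    let mut sum = 0;
    for i in 0..9 {
        // i * 9 + col <= 8 * 9 + 8 = 80 < 81, so this index is provably in bounds
        sum += tab[i * 9 + col];
    }
    sum
}

fn main() {
    let mut tab = vec![0u32; 81];
    tab[0 * 9 + 4] = 7;
    tab[8 * 9 + 4] = 2;
    assert!(column_sum(&tab, 4) == 9);
}
```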


I wonder if you could turn this into a more general compiler feature, something like static_unwrap() which fails to compile if it can’t be proven not to panic. Then your get_verified() might just be get().static_unwrap(). But I fear this probably depends too much on optimization analysis.

#3 outlines another approach; instead of trying to make the compiler prove that indexing with integers is safe, you can explicitly construct indexes which are proven safe by the type-checker.
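That approach can be sketched in today’s Rust roughly as follows (all names here are invented for illustration; the indexing crate explores the idea much more carefully). An index carries an invariant lifetime “brand” tying it to one specific container, so merely holding one proves it is in bounds:

```rust
use std::marker::PhantomData;

// Rough sketch of branded indexes (invented names). The invariant
// lifetime 'id ties each Ix to exactly one Container, so an Ix is a
// proof of in-bounds-ness and its use needs no run-time test.
struct Container<'id> {
    data: Vec<u32>,
    _brand: PhantomData<*mut &'id ()>, // invariant in 'id
}

#[derive(Copy, Clone)]
struct Ix<'id> {
    idx: usize,
    _brand: PhantomData<*mut &'id ()>,
}

impl<'id> Container<'id> {
    // The only way to obtain an Ix<'id> is from this container's own range.
    fn indices(&self) -> Vec<Ix<'id>> {
        (0 .. self.data.len())
            .map(|i| Ix { idx: i, _brand: PhantomData })
            .collect()
    }

    fn get(&self, ix: Ix<'id>) -> u32 {
        // Sound because every Ix<'id> was produced by indices() above.
        unsafe { *self.data.get_unchecked(ix.idx) }
    }
}

// The higher-ranked closure makes 'id fresh per call, so indexes from
// two different containers can never be mixed up.
fn with_container<R, F: for<'id> FnOnce(Container<'id>) -> R>(data: Vec<u32>, f: F) -> R {
    f(Container { data: data, _brand: PhantomData })
}

fn main() {
    let total = with_container(vec![1, 2, 3], |c| {
        c.indices().into_iter().map(|ix| c.get(ix)).sum::<u32>()
    });
    assert!(total == 6);
}
```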


You need something that is industry-strength, efficient to compile, and easy to understand.

I think that to tackle this array-bounds problem (and some other related problems with arrays and more) you need to use more than one approach at the same time. Using indexes that are typed to be safe from their creation is surely another option; it doesn’t replace VRA, but complements it.


Uhm… Ada has used this pattern since '83. Not sure how much more industry-strength you need.


Sorry, I’m not as familiar with Ada as I wish I was. What approach do they use? (Also, do you have any good suggestions for a quality book to read that gives a detailed look? I enjoy reading those sorts of things in my spare time…)


I meant to say that if you want to use that approach in Rust, you need a robust (and probably built into the compiler) implementation. I am sure the Ada implementation is rock-solid, but the crates/indexing implementation is an experiment; it’s not for production.


I was planning to write posts here about this topic, with explanations, examples, possible syntaxes for Rust, and so on. But I am rather slow, because my current job is quite different.

Ada uses various strategies. I think that Rust should use some combined strategies. Designing such things as isolated features for Rust is not the best idea, in my opinion.

The group of approaches I’d like for Rust is composed of five features:

  • Ranged Integral Values (as in Ada, perhaps similar to RFC #671);
  • Optionally Strongly Typed array/slice indexes (as in Ada);
  • User-Defined Static Pre-Conditions (similar to Ada 2012);
  • Value Range Analysis (like in D, but better);
  • Slice Length Range Analysis (similar to VRA above, but for slice lengths);

plus a few little bits like the get_verified() shown above.

Those features are easy to use, some of them are automatic, and working together they avoid problems and bugs and increase performance (by removing some run-time tests for array bounds, slice bounds, and integral overflow, even for some user-defined types).

It’s stuff to implement in the Rust compiler and Rust syntax, but I think it fits Rust’s design purposes.

Note that those features are “low powered”: they are not as general and powerful as dependent types or full proofs (as in Agda, Coq, Idris, Whiley, ATS, F*, and so on), but they are easy to use, and probably fast enough to compile even for large programs (unless you want to extend Value Range Analysis across functions, but this gets hairy). The downside is that this group of approaches, while probably increasing code reliability, doesn’t make the code provably correct, so it isn’t good enough for high-integrity applications.

“Ada for Software Engineers” by Mordechai Ben-Ari surely explains the strongly typed array indexes.

Online you can probably find documents about Ada 2012 that show the static preconditions.

“Building High Integrity Applications with SPARK” by McCormick and Chapin probably contains some more general notes.


John Barnes’ books are the go-to books for detailed feature explanations. They also contain reasons for design decisions.

There’s also the Rationale, which is the official documentation on why design decisions were made. But it’s somewhat hard to read imo.

When declaring an integer type in Ada, it can either be a new type whose values are valid in a certain range, or a subtype of another integer type, with a range that fits into the super-type’s range.
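In Rust terms, an Ada-style ranged integer can be emulated (clumsily) with a newtype whose constructor enforces the range. This is only an illustrative sketch with invented names; the point of a real Ranged Integral Values feature is that the range would be part of the type, checked statically where possible:

```rust
// Sketch of an Ada-like ranged integer as a Rust newtype (invented
// names). Ada checks the range at every assignment; here the check
// happens only in the constructor, and the range is invisible to the
// type system, which is exactly what the language feature would add.
#[derive(Copy, Clone, Debug, PartialEq)]
struct Digit(u8); // intended range: 1 through 9

impl Digit {
    fn new(v: u8) -> Option<Digit> {
        if v >= 1 && v <= 9 { Some(Digit(v)) } else { None }
    }
}

fn main() {
    assert!(Digit::new(5).is_some());
    assert!(Digit::new(0).is_none());
    assert!(Digit::new(10).is_none());
}
```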

When declaring an array type in Ada, one also has to declare the index type that the array is supposed to use. Ada works mostly with fixed-size-arrays (it has a Vector in the standard library, but that one doesn’t do more wrt index checking than Rust’s Vec). The array size is determined by the index type’s covered range. Thus, any value of the index type is a legal index into the array.

What’s also legal is to use an enum as the index type. This way one can index into the array with the enum values and never worry about out of bounds, because the array contains a value for every possible enum variant.
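That Ada pattern can be approximated in today’s Rust with a newtype and an Index impl (all names invented for illustration). The difference is that Rust still emits a bound test the optimizer must then remove, while in Ada the test is statically absent:

```rust
use std::ops::Index;

// Sketch of Ada's enum-indexed arrays in Rust (invented names):
// indexing by the enum can never be out of bounds, because every
// variant maps to a valid slot.
#[derive(Copy, Clone)]
enum Color { Red, Green, Blue }

struct ByColor<T>([T; 3]);

impl<T> Index<Color> for ByColor<T> {
    type Output = T;
    fn index(&self, c: Color) -> &T {
        // `c as usize` is always 0, 1, or 2, so this can never panic;
        // the bound test the compiler emits is trivially removable.
        &self.0[c as usize]
    }
}

fn main() {
    let counts = ByColor([10u32, 20, 30]);
    assert!(counts[Color::Green] == 20);
    assert!(counts[Color::Blue] == 30);
}
```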

Some great array-related features of Ada, expressed as hypothetical Rust features:

let slice: &[u8; 42] = &array[100 .. 142]; // compile-time slice length -> reference to a fixed-size array
for i in array.indices() { // hypothetical method that returns `0 .. array.len()`, but as a newtype with a phantom borrow of `array`
    let x = array[i]; // no array bound check
    let y = array[i / 2]; // no array bound check
    let z = array[i / 2 + x]; // no array bound check if the array length is statically known to be > x*2
}

oh, you meant the implementation, sorry, I misunderstood. But it should still be possible to write some of these features in a visually pleasing way in a crate. Others definitely require compiler support.


This is the Optionally Strongly Typed array/slice indexes from my list above (in Rust this must be an optional feature); it works together with some other features of my list, like Ranged Integral Values, and probably with Slice Length Range Analysis too, and more.


Does this mean that whether or not your program compiles depends on what the optimizer is able to prove? That seems like it might be hard to keep stable from one release of the compiler to the next.


If you introduce Value Range Analysis (VRA) in the Rust front-end, then its capabilities become a requirement for all Rust front-ends; it isn’t an optional optimization pass.

In the D language this code compiles, even though args.length is of type size_t, because basic VRA sees that the result of “size_t % 100” always fits in a ubyte:

void main(in string[] args) {
    ubyte n = args.length % 100;
}

So it’s not a matter of optimization but of what the compiler is able to prove, and this is part of the compiler specs.

I think this means that a function like get_verified() should work with what the Rust front-end is able to infer, not with what the LLVM optimization passes can prove afterwards…
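For comparison, the same narrowing in today’s Rust needs an explicit cast, because the type system alone doesn’t track value ranges; a hypothetical VRA pass would accept the cast-free version:

```rust
fn main() {
    let len: usize = std::env::args().count();
    // Today: an explicit cast is required, even though `len % 100`
    // provably fits in a u8.
    let n: u8 = (len % 100) as u8;
    // With Value Range Analysis this could become legal:
    //     let n: u8 = len % 100;
    assert!(n < 100);
    println!("{}", n);
}
```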