I've decided to write this post after this Reddit thread:

A little Rust program that computes the n-th Hamming number:

```
use std::num::NonZeroUsize;

/// Returns the n-th Hamming number (the classic Dijkstra merge:
/// each new value is the smallest unused multiple of 2, 3 or 5).
fn hamming(n: NonZeroUsize) -> u64 {
    let n = n.get();
    let mut h = vec![0; n];
    h[0] = 1;
    let mut x2 = 2;
    let mut x3 = 3;
    let mut x5 = 5;
    let mut i = 0;
    let mut j = 0;
    let mut k = 0;
    for idx in 1..n {
        h[idx] = x2.min(x3).min(x5);
        if h[idx] == x2 {
            i += 1;
            x2 = 2 * h[i];
        }
        if h[idx] == x3 {
            j += 1;
            x3 = 3 * h[j];
        }
        if h[idx] == x5 {
            k += 1;
            x5 = 5 * h[k];
        }
    }
    h[n - 1]
}

fn main() {
    let nz = |x| NonZeroUsize::new(x).unwrap();
    // See https://github.com/rust-lang/rust/issues/73121
    println!("{:?}", (1..21).map(|i| hamming(nz(i))).collect::<Vec<_>>());
    println!("{:?}", hamming(nz(1_691)));
}
```

The assembly emitted by the latest Nightly rustc with the "best" optimization flags still contains about five panic_bounds_check calls, apparently one for each indexing site in the function (h[0], h[idx], h[i], h[j], h[k]):

```
.LBB3_9:
        lea     rdx, [rip + .L__unnamed_1]
        call    qword ptr [rip + core::panicking::panic_bounds_check@GOTPCREL]
        jmp     .LBB3_12
.LBB3_15:
        lea     rdx, [rip + .L__unnamed_2]
        mov     rdi, rax
        call    qword ptr [rip + core::panicking::panic_bounds_check@GOTPCREL]
        jmp     .LBB3_12
.LBB3_19:
        lea     rdx, [rip + .L__unnamed_3]
        mov     rdi, rax
        call    qword ptr [rip + core::panicking::panic_bounds_check@GOTPCREL]
        jmp     .LBB3_12
.LBB3_23:
        lea     rdx, [rip + .L__unnamed_4]
        mov     rdi, rax
        call    qword ptr [rip + core::panicking::panic_bounds_check@GOTPCREL]
.LBB3_12:
        ud2
.LBB3_32:
        call    alloc::raw_vec::RawVec<T,A>::allocate_in::{{closure}}
        ud2
.LBB3_33:
        mov     rdi, rsp
        call    alloc::raw_vec::RawVec<T,A>::allocate_in::{{closure}}
        ud2
.LBB3_11:
        lea     rdx, [rip + .L__unnamed_5]
        call    qword ptr [rip + core::panicking::panic_bounds_check@GOTPCREL]
        jmp     .LBB3_12
        mov     rbx, rax
        mov     rdi, rsp
        call    core::ptr::drop_in_place
        mov     rdi, rbx
        call    _Unwind_Resume@PLT
        ud2
```

I rewrote the code in WhyML (http://why3.lri.fr/try/), and even the single, limited solver available online is able to discharge all the array accesses as safe (so a Rust compiler could replace them with get_unchecked/get_unchecked_mut). The code carries extra invariants because I added an ensures{} clause to also verify a small amount of functional correctness: the array contains only the results of chains of multiplications of positive numbers, so if the result is positive the dynamic-programming computation is at least not grossly botched:

```
module Hamming
  use int.Int
  use ref.Ref
  use array.Array
  use int.MinMax

  let hamming (n: int) : int =
    requires { n > 0 }
    ensures { result > 0 }
    let h = make n 0 in
    h[0] <- 1;
    let x2 = ref 2 in
    let x3 = ref 3 in
    let x5 = ref 5 in
    let i = ref 0 in
    let j = ref 0 in
    let k = ref 0 in
    for idx = 1 to n - 1 do
      invariant { 0 <= !i < n /\ !i < idx }
      invariant { 0 <= !j < n /\ !j < idx }
      invariant { 0 <= !k < n /\ !k < idx }
      invariant { !x2 > 0 /\ !x3 > 0 /\ !x5 > 0 }
      invariant { forall i: int. 0 <= i < idx -> h[i] > 0 }
      let aux = min !x2 (min !x3 !x5) in
      h[idx] <- aux;
      if aux = !x2 then begin
        i := !i + 1;
        x2 := 2 * h[!i];
      end;
      if aux = !x3 then begin
        j := !j + 1;
        x3 := 3 * h[!j];
      end;
      if aux = !x5 then begin
        k := !k + 1;
        x5 := 5 * h[!k];
      end;
    done;
    h[n - 1]
end
```
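To make concrete what a verifier would buy us, here is a sketch (not from the original post) of what the Rust function could look like if a compiler or proof tool had established the invariants above: the loop indices are proved in bounds, so the accesses become get_unchecked, with the WhyML invariants restated as debug_assert!s. Note the unsafe blocks are justified only by the external proof, not by anything the Rust compiler checks:

```rust
use std::num::NonZeroUsize;

fn hamming_unchecked(n: NonZeroUsize) -> u64 {
    let n = n.get();
    let mut h = vec![0u64; n];
    h[0] = 1; // n >= 1 is guaranteed by NonZeroUsize
    let (mut x2, mut x3, mut x5) = (2, 3, 5);
    let (mut i, mut j, mut k) = (0, 0, 0);
    for idx in 1..n {
        // Invariants proved in WhyML: i, j, k < idx < n at the top of
        // each iteration, so every access below stays in bounds.
        debug_assert!(i < idx && j < idx && k < idx);
        let aux = x2.min(x3).min(x5);
        // SAFETY: idx < n by the loop bounds.
        unsafe { *h.get_unchecked_mut(idx) = aux; }
        if aux == x2 {
            i += 1;
            // SAFETY: i was < idx before the increment, so i <= idx < n.
            x2 = 2 * unsafe { *h.get_unchecked(i) };
        }
        if aux == x3 {
            j += 1;
            // SAFETY: same argument as for i.
            x3 = 3 * unsafe { *h.get_unchecked(j) };
        }
        if aux == x5 {
            k += 1;
            // SAFETY: same argument as for i.
            x5 = 5 * unsafe { *h.get_unchecked(k) };
        }
    }
    h[n - 1]
}

fn main() {
    let nz = |x| NonZeroUsize::new(x).unwrap();
    assert_eq!(hamming_unchecked(nz(10)), 12);
    assert_eq!(hamming_unchecked(nz(20)), 36);
    println!("ok");
}
```

The point isn't to write this by hand (a stray bug in the invariants would be undefined behavior); it's that a prover could let the compiler generate exactly this from the safe indexing version.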

In WhyML integers are unbounded, which makes the proofs simpler. Adding a similar proof system to Rust is harder because most Rust numbers are fixed-size, so a prover must also rule out overflow. Supporting proofs about everything that's allowed in Rust is probably of little use and/or too hard. As Ada/SPARK shows, it's OK to support only a subset of the language's features in the functions you want to prove invariants about (and as the years pass, the subset of Ada allowed in SPARK code keeps growing).
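A concrete instance of the fixed-size problem: in the Rust version, 2 * h[i] silently relies on the product fitting in a u64, a proof obligation that simply doesn't exist over WhyML's unbounded integers. A minimal sketch of making that obligation explicit with checked_mul:

```rust
fn main() {
    // Over unbounded integers, 2 * x is always the true product.
    // Over u64 a prover must additionally show the product fits in 64 bits:
    let x: u64 = 1 << 63;
    assert_eq!(x.checked_mul(2), None); // 2^64 doesn't fit: overflow reported
    assert_eq!(3u64.checked_mul(2), Some(6)); // small products are fine
    println!("ok");
}
```

So a Rust prover would have to discharge not only the bounds-check obligations above, but also an overflow obligation at every arithmetic operation.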