Can Epochs change the definition of a stdlib trait?


#21

That’s where the blanket implementation `impl<I, T> Index2<T> for I where I: Index<T>` comes in. `a[b]` would just (ignoring auto-ref and friends) desugar to `::std::ops::Index2::index(a, b)`, and `Index2::index(_)` is equivalent to `Ok(Index::index(_))` due to the blanket impl.
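To make the idea concrete, here is a minimal sketch of what such a fallible trait and its blanket impl could look like. The names `Index2`, `index2`, and `IndexError` are illustrative placeholders, not anything from std:

```rust
use std::ops::Index;

#[derive(Debug)]
struct IndexError;

// Hypothetical fallible indexing trait.
trait Index2<Idx> {
    type Output;
    fn index2(&self, index: Idx) -> Result<&Self::Output, IndexError>;
}

// Blanket impl: every existing infallible Index impl automatically
// gets a fallible wrapper, which simply wraps the result in Ok
// (the underlying Index::index still panics on OOB, as today).
impl<I, T> Index2<T> for I
where
    I: Index<T>,
    <I as Index<T>>::Output: Sized,
{
    type Output = <I as Index<T>>::Output;

    fn index2(&self, index: T) -> Result<&Self::Output, IndexError> {
        Ok(self.index(index))
    }
}

fn main() {
    let v = vec![10, 20, 30];
    // Under the proposal, a[b] would desugar to Index2-style indexing.
    assert_eq!(*v.index2(1).unwrap(), 20);
}
```

Types that want genuinely fallible indexing would implement `Index2` directly and return `Err` instead of panicking.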


#22

@jjpe it’s good that you try to do good error-handling everywhere. But is there actually something useful you can do to handle an array index being out of bounds? Most people clearly don’t think so.
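For cases where there is something useful to do, the standard library already offers a non-panicking alternative today: `slice::get` returns an `Option` that can be mapped into a domain error. A small sketch (the `lookup` function and its error message are made up for illustration):

```rust
// Handle an out-of-bounds index without panicking: slice::get returns
// Option<&T>, which we convert into a Result for the caller.
fn lookup(values: &[i32], i: usize) -> Result<i32, String> {
    values
        .get(i)
        .copied()
        .ok_or_else(|| format!("index {i} out of bounds (len {})", values.len()))
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(lookup(&v, 1), Ok(2));
    assert!(lookup(&v, 9).is_err()); // error value, no panic
}
```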

I agree that returning Result<..> is often the right thing to do, but not always.

Guaranteeing that your program never panics is impossible unless you avoid all allocation and recursion, are very careful about all the code you use, and are certain you have sufficient stack size. That’s a high bar to reach just to make your program not panic, and it’s still not sufficient to prove your program is correct. Do you really need that?


#23

Indeed, in general it’s impossible to 100% guarantee that panics won’t occur. But in practice things like OOM are a non-issue unless you have incredibly demanding software (which exists, but is relatively rare across the entire software landscape, and not really applicable here), or you have a memory leak somewhere (much more likely in general), or the hardware is constrained (not applicable at all here).

So I agree that OOB indexing is a bug, but here’s my use case: I have written an interpreter whose language has indexing, which is simply passed through to Rust. The current implementation uses a custom method for index-like operations, and it works fine. What I want is more distinctive source code, so that these indexing passthrough operations (which happen now anyway) stand out in the Rust sources. The `Index` trait is 95% of the way there; the only obstacle is its still-somewhat-arbitrary-sounding definition around the return type of the `.index()` method. If the `Index` trait did not mandate panicking on OOB (which it does through the lack of alternative options), it would fit this use case.

The thing is, when the interpreter runs in production it shouldn’t panic, because an OOB here means faulty input to the interpreter, not a faulty implementation. Hence the user should be notified, rather than having the process panic.
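The use case above can be sketched roughly as follows. The `Value` and `EvalError` types are hypothetical stand-ins for whatever the actual interpreter uses; the point is that an out-of-bounds index in the interpreted program becomes an error value reported to the user, not a Rust panic:

```rust
// Hypothetical interpreter values and errors.
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Int(i64),
    List(Vec<Value>),
}

#[derive(Debug, PartialEq)]
enum EvalError {
    IndexOutOfBounds { index: usize, len: usize },
    TypeMismatch,
}

// Indexing in the interpreted language, passed through to Rust.
// OOB is faulty *input* to the interpreter, so it is surfaced as an
// EvalError for the user instead of panicking the process.
fn eval_index(target: &Value, index: usize) -> Result<Value, EvalError> {
    match target {
        Value::List(items) => items
            .get(index)
            .cloned()
            .ok_or(EvalError::IndexOutOfBounds { index, len: items.len() }),
        _ => Err(EvalError::TypeMismatch),
    }
}

fn main() {
    let list = Value::List(vec![Value::Int(1), Value::Int(2)]);
    assert_eq!(eval_index(&list, 0), Ok(Value::Int(1)));
    assert_eq!(
        eval_index(&list, 5),
        Err(EvalError::IndexOutOfBounds { index: 5, len: 2 })
    );
}
```

With today’s `Index` trait this has to stay a named method like `eval_index`; a fallible indexing trait would let the passthrough use `[]` syntax directly.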