Implement RangeBounds for &Range*

I have encountered a situation very similar to a Stack Overflow question. Essentially, I want to write a method that accepts either an owned or a borrowed range as an argument, to save users from having to write f(&(1..32)). Naturally, we can use Borrow to do this:

fn f<T: Debug, U: Debug + RangeBounds<T>, V: Borrow<U>>(range: V) {
    let range = range.borrow();
    // use `range: &U` here
}

However, the compiler cannot infer the type of U, since it could be any type the argument borrows as, not just the type behind the reference. Thus, when calling it with a borrowed range, we need to provide the type explicitly:

    let range = 1..2;
    f::<_, Range<u32>, _>(&range);
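For reference, here is the whole workaround as a self-contained sketch (I've made f return a string describing the bounds, purely so the behavior is visible; the thread's f presumably does something more useful):

```rust
use std::borrow::Borrow;
use std::fmt::Debug;
use std::ops::{Range, RangeBounds};

// The Borrow-based workaround: V may be an owned range or a reference to one.
fn f<T: Debug, U: Debug + RangeBounds<T>, V: Borrow<U>>(range: V) -> String {
    let range: &U = range.borrow();
    format!("{:?}..{:?}", range.start_bound(), range.end_bound())
}

fn main() {
    // Owned argument: U is inferred without help.
    assert_eq!(f(1..32), "Included(1)..Excluded(32)");

    // Borrowed argument: U is ambiguous (it could be Range<u32> or
    // &Range<u32>, via the two blanket Borrow impls), so it must be spelled out.
    let range = 1..2;
    assert_eq!(f::<_, Range<u32>, _>(&range), "Included(1)..Excluded(2)");
}
```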

If &Range<T> and the other borrowed range types were to implement RangeBounds, the problem would become moot, since one could just write:

fn f<T: Debug, U: Debug + RangeBounds<T>>(range: U) { /* ... */ }

Are there any gotchas that would make this a bad idea?


I'd like to have this for another reason (and was slightly surprised it doesn't exist): if you want to reuse a range (e.g. to index two slices in lockstep[1]) you (infamously) have to either resort to cloning or use two separate range literals:

fn foo(a: &[i32], b: &[i32]) {
    let r = compute_start()..compute_end();
    let (x, y) = (&a[r.clone()], &b[r]); // Range is not Copy, so it must be cloned
    // Or
    let (start, end) = (compute_start(), compute_end());
    let (x, y) = (&a[start..end], &b[start..end]);
}

  1. My real-world use case: the color and depth buffers in a 3D rasterizer

You can restrict the options that inference can consider:
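(The code from this reply didn't survive extraction, so here is one possible shape of the idea, with hypothetical names: a local helper trait that, unlike Borrow, is implemented only for the concrete range types and for references to them, so inference has exactly one candidate to pick.)

```rust
use std::fmt::Debug;
use std::ops::{Range, RangeBounds};

// Hypothetical helper trait: restricts what inference can consider,
// because only range types (and references to them) implement it.
trait AsRangeBounds<T> {
    type Bounds: RangeBounds<T>;
    fn as_bounds(&self) -> &Self::Bounds;
}

impl<T> AsRangeBounds<T> for Range<T> {
    type Bounds = Range<T>;
    fn as_bounds(&self) -> &Self::Bounds {
        self
    }
}

// Forward through references, covering &Range<T>, &&Range<T>, ...
// (the other range types would get impls just like the one above).
impl<T, R: AsRangeBounds<T>> AsRangeBounds<T> for &R {
    type Bounds = R::Bounds;
    fn as_bounds(&self) -> &Self::Bounds {
        (**self).as_bounds()
    }
}

fn f<T: Debug, R: AsRangeBounds<T>>(range: R) -> String {
    let range = range.as_bounds();
    format!("{:?}..{:?}", range.start_bound(), range.end_bound())
}

fn main() {
    assert_eq!(f(1..32), "Included(1)..Excluded(32)");
    let r = 1..2;
    // No turbofish needed for the borrowed call anymore.
    assert_eq!(f(&r), "Included(1)..Excluded(2)");
}
```

The key design point is that there is only one impl chain that can apply to any given argument type, so the ambiguity that plagued the Borrow version never arises.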


Oh, that's clever, that would work in my case. Thanks!

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.