Setting slice values to a scalar

I wanted to do something like this:

x[a..b] = 0;

where a and b are variables, not constants.

There doesn’t seem to be a well-known, easy way to do so (rust-beginners didn’t know anything close).

IMO this is a fairly ergonomic way to initialize a range equal to some constant.

1 Like

What’s wrong with the obvious

for i in a..b { x[i] = 0 }

?

What's wrong with the obvious

It's longer and more bug-prone, and it takes more time to read it and confirm that it's correct.

While I like a little bit of syntactic sugar, too much of it is bad for readability. Assigning an integer to a slice just makes no sense to me.

You can very quickly learn its meaning, and it will start to make a lot of sense to you too :slight_smile: This is the D language:

void main() {
    auto arr = [1, 2, 3];
    arr[1 .. 3] = 10;
    assert(arr == [1, 10, 10]);
}

Somewhat related:

There’s a safe wrapper around memcpy by the name of slice::copy_from_slice. However, there’s no safe wrapper around memset.
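
To make the asymmetry concrete, here is a small sketch of my own (not from the post): the safe copy exists as copy_from_slice, while the closest thing to a memset today is an unsafe raw-pointer call.

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];

    // Safe memcpy wrapper: panics if the two lengths differ.
    dst.copy_from_slice(&src);
    assert_eq!(dst, [1, 2, 3, 4]);

    // There is no safe memset wrapper; the closest equivalent is unsafe.
    unsafe { std::ptr::write_bytes(dst.as_mut_ptr(), 0, dst.len()) };
    assert_eq!(dst, [0u8; 4]);
}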

@leonardo

What I expect from a really strongly typed language like Rust is exactly preventing this.

Sorry, I don’t like that “save” button a lot :slight_smile:

Strong typing is not a religion; it has its purposes. Do you prefer something like this in Rust?

x[a..b].slice_assign(0);

One difference is that slices of arbitrary Copy types can be copied with memcpy, while memset only makes sense for single-byte types (or if all bytes happen to be set to the same value, which is extremely rare except when zeroing). Expressing the latter in the type system is not possible today.

1 Like
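
A small illustration of that point, using my own example (not from the thread): zeroing a &mut [u16] is expressible as a memset because every byte written is the same, filling it with a multi-byte pattern is not, while a memcpy-style copy handles any Copy type regardless of its byte pattern.

fn main() {
    // Zeroing: every byte written is 0x00, so a single memset can implement this loop.
    let mut a = [0xFFFFu16; 4];
    for x in a.iter_mut() { *x = 0; }
    assert_eq!(a, [0u16; 4]);

    // Filling with 0x1234: the two bytes differ, so memset alone cannot express it.
    let mut b = [0u16; 4];
    for x in b.iter_mut() { *x = 0x1234; }
    assert_eq!(b, [0x1234u16; 4]);

    // A memcpy-style copy works for any Copy type, whatever its byte pattern.
    let mut c = [0u16; 4];
    c.copy_from_slice(&b);
    assert_eq!(c, b);
}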

I think even having a safe memset for &mut [u8] would be a great boon for embedded devs.

2 Likes

I believe a loop gets optimized to memset as it is, so there’s no need to call it explicitly.

I still support the notion that slice.memset(42) is more readable than for el in slice { *el = 42; }

2 Likes
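
For what it's worth, the loop shape that typically gets lowered to a memset call looks like the plain version below; this relies on LLVM's loop-idiom recognition with optimizations enabled and is not something the language guarantees.

// Typically compiled to a single memset call with optimizations on,
// but that is an LLVM heuristic, not a Rust guarantee.
pub fn zero(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        *b = 0;
    }
}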

Perhaps, but it’s a less pressing need because it’s more narrow, that’s all I was saying.

1 Like

Drafts of RFC 1419 proposed a safe memset wrapper, slice.fill(value). This was later removed from the RFC so the other parts could be merged without waiting for the details of fill to be worked out.

A good next step would be for someone to write a new RFC for just the fill method, taking into account the previous discussion on RFC 1419.

10 Likes

Since it hasn’t been mentioned, the idiomatic way to do this now IMO is using a for loop with a slice iterator:

for elt in &mut x[a..b] {
    *elt = 0;
}

I’d be thrilled if either of IndexAssign or a method like .set(0), .fill(0) or anything like that appeared.

1 Like

It doesn’t seem like it conveys what I want to say to the compiler.

What I want to say is “set this range of values equal to the right hand side”. What the for loop says is “set each value equal to the right hand side”.

Suppose the example were more complicated. A natural reading of this code would be “Create a random number and set the a..b range of x equal to it”:

x[a..b] = rand::thread_rng().gen();

A natural reading of this code would be “Create b-a random numbers in series and set the a..b indices of x equal to the corresponding random number”:

for i in a..b {
    x[i] = rand::thread_rng().gen();
}

You can of course express the alternative:

let rn = rand::thread_rng().gen();
for i in a..b {
    x[i] = rn;
}

but that’s still subtly different: “Set each a..b value of x to rn”.

E.g., consider the situation where x refers to a compressed series in which setting a whole range to the same value is an O(1) operation, but setting individual entries is an O(n) operation. I’m somewhat dubious that that sort of situation is prevalent enough to justify an extra trait for setting a range to a single value of the element type, but at least an implicit memset for basic types would be some nice syntactic sugar.
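
As a rough sketch of that kind of structure (entirely made up, not a real crate): a run-length-encoded buffer can implement a range fill by touching only the runs the range overlaps, so the cost scales with the number of runs rather than the number of elements (not literally O(1), but in the same spirit). The range is assumed to lie within the buffer.

struct Rle<T> {
    runs: Vec<(usize, T)>, // (run length, value)
}

impl<T: Clone> Rle<T> {
    // Overwrite indices start..end with `value`. Cost is proportional to the
    // number of runs, not to the number of elements in the range.
    fn fill_range(&mut self, start: usize, end: usize, value: T) {
        let mut out = Vec::new();
        let mut pos = 0;
        let mut inserted = end <= start; // nothing to insert for an empty range
        for (len, v) in self.runs.drain(..) {
            let (run_start, run_end) = (pos, pos + len);
            pos = run_end;
            if run_start < start {
                // keep the part of this run that lies before the filled range
                out.push((start.min(run_end) - run_start, v.clone()));
            }
            if !inserted && run_end > start {
                // first run overlapping the range: emit the fill run once
                out.push((end - start, value.clone()));
                inserted = true;
            }
            if run_end > end {
                // keep the part of this run that lies after the filled range
                out.push((run_end - end.max(run_start), v));
            }
        }
        self.runs = out;
    }

    fn to_vec(&self) -> Vec<T> {
        self.runs
            .iter()
            .flat_map(|(len, v)| std::iter::repeat(v.clone()).take(*len))
            .collect()
    }
}

fn main() {
    let mut x = Rle { runs: vec![(5, 1)] }; // [1, 1, 1, 1, 1]
    x.fill_range(1, 4, 9);
    assert_eq!(x.to_vec(), vec![1, 9, 9, 9, 1]);
}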

The “fill” solution suggested above seems less clean semantically, but if not overloading the assignment operator is a core tenet of the language, then I think it’s still a step up from where things are now, since there are already helper functions for slices.

If the compiler is smart enough to take a look at a for loop and optimize away the whole thing and replace it with a call to the compiler-intrinsic memset, that’s more intelligence than I’d assume.

EDIT: One more point I’d add is that with the “set range to scalar” or “fill” solution, the compiler / fill function would be free to do the operation in parallel (if possible) without violating the apparent semantics of the language. Conversely, with a for loop, the implicit copy() called on the scalar could hypothetically have side effects which were intended to be executed in series. Inspecting the copy function to determine if a parallel optimization of the for loop is safe might not be possible if it’s pulled in from a shared binary.

2 Likes

It conveys it to me, although I'd argue this is more in the context of numeric programming, and I could see why .fill might be preferred in std. I'd like to weigh in that enabling this syntax for ndarray would be a pretty big ergonomic win; currently we have

x.slice_mut(s![a..b]).fill(0.);

or

x.slice_mut(s![a..b]).assign(&arr1(&[6., 2.]));

when assigning to something of higher dimensionality.
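
For context, a self-contained version of those two snippets might look like this (assuming a recent ndarray; the shape and values are made up for illustration):

use ndarray::{arr1, s, Array1};

fn main() {
    let mut x: Array1<f64> = Array1::zeros(6);
    // fill a mutable sub-view with a scalar
    x.slice_mut(s![0..4]).fill(0.5);
    // assign another array into a sub-view of matching shape
    x.slice_mut(s![4..6]).assign(&arr1(&[6., 2.]));
    assert_eq!(x, arr1(&[0.5, 0.5, 0.5, 0.5, 6., 2.]));
}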

I skimmed through the earlier fill() proposal, and it seems to me that most of the ambiguity was about how much should be guaranteed about the level of optimization. IMO, fill<T: Clone>(&mut self, val: &T) with a naive loop implementation is all that’s needed, purely for the sake of code clarity. Between monomorphization, inlining, and LLVM heuristics, I’d be utterly unsurprised if it compiles even better than memset for small Copy types.
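
For reference, the naive version being described could look roughly like this, written as a free function since inherent methods can’t be added to [T] outside std (the name simply mirrors the proposal):

// Naive fill with the signature sketched above.
fn fill<T: Clone>(slice: &mut [T], val: &T) {
    for elt in slice {
        *elt = val.clone();
    }
}

fn main() {
    let mut x = [1u8; 8];
    fill(&mut x[2..6], &0);
    assert_eq!(x, [1, 1, 0, 0, 0, 0, 1, 1]);
}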
