Concurrent iterators/types


The idea is to have a built-in iterator that runs itself concurrently and automatically creates an optimal number of threads based on your machine.


```rust
let my_map = HashMap::new();

for (k, v) in my_map.iter_conc() { // run iterations in parallel
    // ...
}

// The loop above must finish before you go on.
```

Furthermore, the data type being iterated over would need to be lock-free. I suggest either adding a keyword or introducing dedicated concurrent types, with read/write locks applied to them automatically.

```rust
let my_map = HashMap::new();
let mutate_map = ConcurrentHashMap::new();

for (k, v) in my_map.iter_conc() {
    mutate_map.insert(k, v);
}
```
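A minimal sketch of what such a concurrent type could look like today, assuming a hypothetical `ConcurrentHashMap` that simply wraps a `HashMap` in a read/write lock (the name and API are illustrative, not an existing library):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::RwLock;

// Hypothetical concurrent map: a HashMap guarded by an RwLock, so many
// readers can proceed in parallel while a writer takes exclusive access.
struct ConcurrentHashMap<K, V> {
    inner: RwLock<HashMap<K, V>>,
}

impl<K: Hash + Eq, V: Clone> ConcurrentHashMap<K, V> {
    fn new() -> Self {
        ConcurrentHashMap { inner: RwLock::new(HashMap::new()) }
    }

    fn insert(&self, k: K, v: V) {
        // Write lock: exclusive for the duration of the insert.
        self.inner.write().unwrap().insert(k, v);
    }

    fn get(&self, k: &K) -> Option<V> {
        // Read lock: shared, so concurrent reads do not block each other.
        self.inner.read().unwrap().get(k).cloned()
    }
}

fn main() {
    let map = ConcurrentHashMap::new();
    map.insert("answer", 42);
    assert_eq!(map.get(&"answer"), Some(42));
    assert_eq!(map.get(&"missing"), None);
}
```

Note that `insert` takes `&self`, not `&mut self`, which is what lets the map be mutated from inside a shared loop body.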



It would look nicer if there were a ThreadPool so we could do:

```rust
let vec = vec![1, 2, 3, 4, 5, 6];
let pool = ThreadPool::with_size(4);

for v in pool.iterate_over(vec) {
    // no more than 4 threads will run at the same time
}
```
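A rough sketch of how such a pool-bounded loop could work with scoped threads (the function name `pool_for_each` is made up for illustration): workers claim indices from a shared atomic cursor, so at most `n_threads` loop bodies run at once, and the scope joins everything before returning.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// Hypothetical pool-bounded iteration: at most `n_threads` workers pull
// items via a shared atomic cursor; `thread::scope` joins them all before
// returning, so the "loop" is guaranteed to finish before execution goes on.
fn pool_for_each<T: Sync>(items: &[T], n_threads: usize, f: impl Fn(&T) + Sync) {
    let next = AtomicUsize::new(0);
    thread::scope(|s| {
        for _ in 0..n_threads {
            s.spawn(|| loop {
                // Claim the next unprocessed index.
                let i = next.fetch_add(1, Ordering::Relaxed);
                match items.get(i) {
                    Some(item) => f(item),
                    None => break,
                }
            });
        }
    });
}

fn main() {
    let vec = vec![1, 2, 3, 4, 5, 6];
    let sum = AtomicUsize::new(0);
    pool_for_each(&vec, 4, |v| {
        sum.fetch_add(*v, Ordering::Relaxed);
    });
    assert_eq!(sum.load(Ordering::Relaxed), 21);
}
```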


This would pretty much require internal iterators be added back. This is, incidentally, how D does something similar.

That said, if the goal is to “automagically” parallelise calculations, it’d probably be constructive to look at Halide. It’s been a while since I watched the accompanying video, but from what I remember: if you want efficiency, it’s not as simple as just slapping a “do me in parallel” marker on the code.



As I understand it, C#'s LINQ API has this with .AsParallel().

@DanielKeep I’m not clear why internal iteration would be necessary for this?



Correct. C# also has Parallel.ForEach, which is a little different from using LINQ, but both work great.



He is probably suggesting implementing a Halide-like library in Rust (which should be possible, as they have done it in C++). This would be nice for anyone using Rust in gamedev (and I think that could be a future direction for Rust).



See also Java 8’s streams.



How do you transform that for loop such that the body of the loop runs in multiple threads? Remember that the iterator (which the example strongly indicates is where this magic happens) only talks to the loop via Iterator::next.

I admit I was inaccurate: you don’t necessarily need internal iteration. You could bake magic directly into the compiler. But if you wanted it to be something that’s more broadly useful, internal iteration would be the simplest way to go.

Just to make sure we’re talking about the same thing here, I mean something like:

```rust
trait InternalIterator {
    type Item;
    fn for_each<F>(&mut self, blk: F) where F: Fn(Self::Item);
}
```


While I agree that internal iteration is the easiest way to do this, something like OpenMP for C/C++ does do parallelisation of (simplistic) external iterator based for loops, so it is possible.

That said, I think trying to reuse the for construct directly is not necessarily the right way to do this; I’d prefer making iterator consumers that take closures to run in parallel, e.g. at the most basic, we could have a function

```rust
impl ThreadPool {
    /// Execute `f` on each element of `it`, in parallel.
    fn par_for<It, F>(&self, it: It, f: F)
        where It: Iterator,
              It::Item: Sync + Send,
              F: Fn(It::Item) + Sync {
        // ...
    }
}
```
used like pool.par_for(some_map.iter(), |&(k, v)| { ... }) or pool.par_for(some_slice.chunks(10), |elems| { ... }) (run on 10 elements at a time) etc.

This approach is more DRY than having each type yield a hard-coded concurrent iterator, and there's no particular reason we can't define parallel versions of the adaptors, like a pool.par_map that returns an Iterator, so that, e.g.

```rust
let vec = pool.par_map(some_iter, |elem| { ... }).collect();
```

works, where the closure is run in parallel across some_iter.
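A sketch of how a par_map-style combinator that preserves input order might look, using scoped threads and one task per element for simplicity (the name and signature are illustrative; a real version would batch work through the pool rather than spawn per item):

```rust
use std::thread;

// Hypothetical order-preserving parallel map: run `f` over every item on
// its own scoped thread, then join the handles in spawn order so the
// output comes back in the original order.
fn par_map<T, U, F>(items: Vec<T>, f: F) -> Vec<U>
where
    T: Send,
    U: Send,
    F: Fn(T) -> U + Sync,
{
    thread::scope(|s| {
        let handles: Vec<_> = items
            .into_iter()
            .map(|item| s.spawn(|| f(item)))
            .collect();
        // Joining in spawn order keeps the output order stable.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let doubled = par_map(vec![1, 2, 3], |x| x * 2);
    assert_eq!(doubled, vec![2, 4, 6]);
}
```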

(That said, internal iteration does mean that a type can more easily reuse internal structure to divide work out.)



I actually think Halide would make more sense as an actual language with a compiler. As it stands, if you want to compile a Halide function to an object file, you have to write a C++ program that builds the Halide program in-memory, and then feeds that to the Halide compiler library. Then, you run that program, which outputs an object file, which you link into the actual program… it’s a little bit silly.

Having gone through the Halide intro video again, and looked at some of the examples, I’m not sure if Rust can really make use of the ideas in Halide unless it’s willing to introduce a whole new kind of construct (i.e. first-class kernel fns). It’s not obvious from the basic examples, but it can do things like change traversal order, precompute different parts of subexpressions (possibly redundantly), potentially storing the result at a different level of the expression for efficiency…

I mean, I can see ways of doing this at runtime… but at compile time? I’m not sure how you’d represent that without major syntax changes.

