Rust provides a very good foundation for writing asynchronous code (what with borrowing and such), but there is a gap when it comes to writing bigger systems that are inherently asynchronous.
I am not talking about concurrent calculations like in rayon, or "simple" ownership with the RwLocks we already have, but about something more complicated that is much harder to write, especially with the current macro system.
I am talking about Monitors, or thread-safe structs. Some think that a monitor is little more than a struct wrapped in a mutex, where every function call happens while only one thread is inside the struct. However, monitors are really much more than this.
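For contrast, here is a minimal sketch of that mutex-wrapped struct (the names are illustrative, not a proposed API): every method takes the lock, so only one thread is inside at a time, but that is all it gives you.

```rust
use std::sync::Mutex;

// The "mutex-wrapped struct" that monitors are often reduced to:
// every method takes the lock, so only one thread is inside at a time.
struct Counter {
    inner: Mutex<u64>,
}

impl Counter {
    fn new() -> Self {
        Counter { inner: Mutex::new(0) }
    }

    // Mutual exclusion, but no condition waiting, no FIFO entry order,
    // and no hand-off to a chosen successor on exit.
    fn increment(&self) {
        *self.inner.lock().unwrap() += 1;
    }

    fn get(&self) -> u64 {
        *self.inner.lock().unwrap()
    }
}
```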
Problem: many threads want to access an instance of a type, but only one at a time and only when the instance is ready for them.
Monitors solve this by having the following features.
- Functions are either monitored or monitoring. A monitored function (the default) is one that will only enter once every call made before it has entered and then either left or blocked. A _monitoring_ function is one that can only do reads and is non-blocking (it behaves like a regular function).
From now on, everything below is only about monitored functions.
- Every function call is mutually exclusive with every other function call (even calls to different functions).
- Functions are entered in FIFO order unless otherwise told.
- A function can either run to completion, wait for some condition to become true, or wait until a call to some other function has completed (a user-land sketch of condition waiting follows this list).
- Recursive calls work, meaning that a thread already in the object can call other monitored functions without waiting.
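To make the "wait for some condition to become true" case concrete, here is a minimal user-land sketch of a bounded buffer built from std's `Mutex` and `Condvar`. The names are illustrative, not a proposed API; this gives mutual exclusion and condition waiting, but none of the ordering guarantees discussed below.

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

struct BoundedBuffer<T> {
    state: Mutex<VecDeque<T>>,
    not_empty: Condvar,
    not_full: Condvar,
    capacity: usize,
}

impl<T> BoundedBuffer<T> {
    fn new(capacity: usize) -> Self {
        BoundedBuffer {
            state: Mutex::new(VecDeque::new()),
            not_empty: Condvar::new(),
            not_full: Condvar::new(),
            capacity,
        }
    }

    // A "monitored" function: exclusive access, may block until a condition holds.
    fn put(&self, item: T) {
        let guard = self.state.lock().unwrap();
        // wait_while re-checks the predicate, so spurious wakeups never
        // let a thread proceed before its condition is true.
        let mut guard = self
            .not_full
            .wait_while(guard, |q| q.len() == self.capacity)
            .unwrap();
        guard.push_back(item);
        self.not_empty.notify_one();
    }

    fn take(&self) -> T {
        let guard = self.state.lock().unwrap();
        let mut guard = self
            .not_empty
            .wait_while(guard, |q| q.is_empty())
            .unwrap();
        let item = guard.pop_front().unwrap();
        self.not_full.notify_one();
        item
    }
}
```

`put` blocks until there is room and `take` blocks until there is an item, which covers the condition-waiting feature; FIFO entry, waiting on an ordered list of other functions, and recursive calls have no direct equivalent in this sketch.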
For consistency's sake, some guarantees are also needed:
- Threads never wake up until what they are waiting for is true
- Once a thread has exited the monitor, the next thread to run is deterministic. It depends on what the task just did: finished the last function call in the monitor, waited for one of an ordered list of functions to be called and completed, waited on a condition queue (with a lambda), or signalled a queue.
- A reasonable ordering would be: the task at the front of the signalled queue, then the signaller, then tasks waiting to enter. When waiting on an ordered list of functions, you are implicitly waiting on a new queue that is implicitly signalled just after that task exits the function.
- Lastly, no "barging": when a task leaves the mutex it "passes" ownership to the next task, if there is one, so that newly arriving tasks never see an unowned lock.
How does this sound as something that might be useful in the standard library? I think it is a really useful tool for concurrent and parallel programming. I am currently trying to see how feasible it is in "user-land", but I don't think the ordering and no-barging guarantees are. Those are some of the most important parts, though, which is why I want to get the idea rolling.
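As one data point on the user-land question, here is a hypothetical ticket-based gate (the names and API are mine, purely for illustration) that approximates FIFO entry with std primitives. Entry order comes out FIFO even if the inner Mutex itself barges, because a barging thread merely takes a later ticket; what this cannot express is the direct hand-off from a signaller to the front of a signalled queue, or the priority ordering described above.

```rust
use std::sync::{Condvar, Mutex};

struct TicketGate {
    // (next_ticket, now_serving)
    state: Mutex<(u64, u64)>,
    turn: Condvar,
}

impl TicketGate {
    fn new() -> Self {
        TicketGate {
            state: Mutex::new((0, 0)),
            turn: Condvar::new(),
        }
    }

    // Block until it is this caller's turn; callers proceed in the order
    // in which they took a ticket.
    fn enter(&self) {
        let mut s = self.state.lock().unwrap();
        let my_ticket = s.0;
        s.0 += 1;
        // The inner mutex is released while waiting, so later arrivals
        // can still take tickets; they simply queue up behind us.
        let _guard = self
            .turn
            .wait_while(s, |s| s.1 != my_ticket)
            .unwrap();
        // Guard dropped here: mutual exclusion of the "monitored" section
        // comes from the ticket, not from holding the inner mutex.
    }

    // Hand the turn to the next ticket holder, if any.
    fn exit(&self) {
        let mut s = self.state.lock().unwrap();
        s.1 += 1;
        // notify_all, because the next ticket holder could be any waiter.
        self.turn.notify_all();
    }
}
```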