Thread-safe structs

Rust provides a very good system for writing asynchronous code (thanks to ownership and borrowing). But there is a gap when it comes to writing bigger systems that are inherently asynchronous.

I am not talking about concurrent calculations like in rayon, or “simple” ownership with the RwLocks we already have, but something more complicated that is much harder to write, especially with the current macro system.

I am talking about monitors, or thread-safe structs. Some think that a monitor is little more than a mutex-wrapped struct, where every function call runs while only one thread is inside the struct at a time. However, monitors are really much more than this.


Problem: many threads want to access an instance of a class, but only when the instance is ready for them, and only one at a time.

Monitors solve this by having the following features.

  1. Functions are either monitored or monitoring. A monitored function (the default) only enters once all calls made before it have entered and either left or blocked. A _monitoring_ function is one that can only do reads and is non-blocking (like regular functions).

From now on, the points below are only about monitored functions.

  1. Every function call is mutually exclusive with all other function calls (even calls to different functions).
  2. Functions are entered in FIFO order unless otherwise specified.
  3. A function can either run to completion, wait for some condition to become true, or wait for a call to some other function to complete.
  4. Recursive calls work, meaning that a thread already inside the object can call other monitored functions without waiting. (A hand-written sketch of this kind of object follows this list.)
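To make the monitored/waiting distinction concrete, here is a hand-rolled sketch of that kind of object using only std’s Mutex and Condvar. The bounded-queue example and the names (`BoundedQueue`, `push`, `pop`) are purely illustrative, not part of the proposal, and it makes no attempt at the FIFO or no-barging guarantees discussed below:

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

/// A hand-written "monitor" sketch: every method is mutually exclusive,
/// and `push`/`pop` block until their condition holds.
struct BoundedQueue<T> {
    items: Mutex<VecDeque<T>>,
    not_full: Condvar,
    not_empty: Condvar,
    cap: usize,
}

impl<T> BoundedQueue<T> {
    fn new(cap: usize) -> Self {
        BoundedQueue {
            items: Mutex::new(VecDeque::new()),
            not_full: Condvar::new(),
            not_empty: Condvar::new(),
            cap,
        }
    }

    /// Monitored: enters under the lock and waits until there is room.
    fn push(&self, value: T) {
        let mut q = self.items.lock().unwrap();
        while q.len() == self.cap {
            q = self.not_full.wait(q).unwrap();
        }
        q.push_back(value);
        self.not_empty.notify_one();
    }

    /// Monitored: waits until the queue is non-empty, then takes an item.
    fn pop(&self) -> T {
        let mut q = self.items.lock().unwrap();
        while q.is_empty() {
            q = self.not_empty.wait(q).unwrap();
        }
        let value = q.pop_front().unwrap();
        self.not_full.notify_one();
        value
    }

    /// "Monitoring" in the sense above: read-only. (It still takes the lock
    /// briefly, so it only approximates a truly non-blocking read.)
    fn len(&self) -> usize {
        self.items.lock().unwrap().len()
    }
}
```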

For consistency's sake, some guarantees are also needed:

  1. Threads never wake up until what they are waiting for is true.
  2. Once a thread has exited the monitor, the next thread to run is deterministic. This is based on what the task just did: finished the last function call in the monitor, waited for one of an ordered list of functions to be called and completed, waited on a condition queue (with a lambda), or signaled a queue.
  3. A reasonable ordering would be “the task at the front of the signaled queue”, then “the signaler”, then “tasks waiting to enter”. When waiting on an ordered list you are implicitly waiting on a new queue that is signaled just after that task exits the function.
  4. Lastly, no “barging”. This means that when a task leaves the mutex it “passes” ownership to the next task, if there is one, so that newly arriving tasks never see an unowned lock. (A rough sketch of this hand-off follows this list.)
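For the FIFO-entry and no-barging guarantees specifically, one classic approach is a ticket lock: entry is granted by ticket number rather than by racing for the lock, so ownership is handed to the next waiter explicitly. Here is a rough sketch with std’s Mutex and Condvar; the `TicketLock` name and layout are mine, it is not RAII, and because it uses `notify_all` the waiters can still wake before their turn, so guarantee 1 is only approximated:

```rust
use std::sync::{Condvar, Mutex};

/// A minimal FIFO "ticket" lock sketch: threads enter in the order they asked.
struct TicketLock {
    state: Mutex<(u64, u64)>, // (next_ticket, now_serving)
    cond: Condvar,
}

impl TicketLock {
    fn new() -> Self {
        TicketLock { state: Mutex::new((0, 0)), cond: Condvar::new() }
    }

    fn lock(&self) {
        let mut s = self.state.lock().unwrap();
        let my_ticket = s.0;
        s.0 += 1;
        // Nobody can "barge" past us: entry is granted strictly by ticket
        // number, not by whoever happens to grab the lock first.
        while s.1 != my_ticket {
            s = self.cond.wait(s).unwrap();
        }
    }

    fn unlock(&self) {
        let mut s = self.state.lock().unwrap();
        s.1 += 1; // hand ownership to the holder of the next ticket
        drop(s);
        self.cond.notify_all();
    }
}
```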

How does this sound as something that might be useful in the standard library? I think it is a really useful tool for concurrent and parallel programming. I am currently trying to see how feasible it is in “user-land”, but I don’t think the ordering and no-barging guarantees are achievable there. However, those are some of the most important parts, which is why I want the idea to start rolling.

My read is that you want something like Java’s `synchronized`, which you can implement with a reentrant mutex:

```rust
use parking_lot::ReentrantMutex;

struct K {
    // .. your fields
    lock: ReentrantMutex<()>,
}

impl K {
    fn foo(&self) {
        let guard = self.lock.lock();

        // do whatever must happen with only one thread inside `K`;
        // re-entrant calls to other methods of `K` from here are fine

        // done fussing about synchronously
        drop(guard);
    }
}
```

You might need to fuss with the primitive to get the FIFO semantics. I think this is something that you can definitely put in a crate. Try your hand at making a custom proc-macro like `#[monitored]`.
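For illustration, here is roughly what such an attribute might expand to by hand, using ReentrantMutex from the parking_lot crate. The `Counter` type, its methods, and the attribute itself are all made up for this example; note also that parking_lot’s locks are only eventually fair, not strictly FIFO:

```rust
use parking_lot::ReentrantMutex;
use std::cell::Cell;

// Hypothetical hand-expansion of a `#[monitored]` attribute: every method
// locks the struct's ReentrantMutex before running its body, so recursive
// calls from inside the "monitor" still work.
struct Counter {
    state: ReentrantMutex<Cell<u64>>,
}

impl Counter {
    fn new() -> Self {
        Counter { state: ReentrantMutex::new(Cell::new(0)) }
    }

    fn increment(&self) {
        let guard = self.state.lock(); // would be inserted by the macro
        guard.set(guard.get() + 1);
    }

    fn add(&self, n: u64) {
        let _guard = self.state.lock(); // would be inserted by the macro
        for _ in 0..n {
            self.increment(); // re-entrant: the same thread can lock again
        }
    }
}
```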

As far as whether it belongs in std… I think `std::sync` has more than enough primitives already, and this is a pretty bespoke thing that you can build with a `Mutex<()>`. I suggest making a crate instead.


So from what I understand, that’s function-oriented syntax sugar around RwLock, but with more guarantees about the order in which the lock is acquired?
