Idea: `std::sync::Mutex::sort_key(&self) -> usize` for lock ordering

This is an idea for helping solve deadlock issues when writing multi-threaded code where you need to hold multiple locks at the same time. To review, one way to end up deadlocked when you need to acquire multiple locks is as follows (a runnable sketch follows the list):

  • Threads A and B need to hold locks c and d in order to get work done.
  • Thread A grabs lock c.
  • Thread B grabs lock d.
  • Thread A tries to grab lock d, but is put to sleep because B already has d.
  • Thread B tries to grab lock c, but is put to sleep because A already has c.
  • And voila, deadlock.
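
To make the interleaving concrete, here is a minimal, self-contained sketch of that scenario (only the thread names A/B and lock names c/d come from the list above; the sleeps and payloads are mine, there just to make the bad interleaving very likely). Fair warning: running it will usually hang forever, which is exactly the point.

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let c = Arc::new(Mutex::new(0));
    let d = Arc::new(Mutex::new(0));

    let (c_a, d_a) = (Arc::clone(&c), Arc::clone(&d));
    let a = thread::spawn(move || {
        let _c = c_a.lock().unwrap();             // A grabs lock c...
        thread::sleep(Duration::from_millis(50));
        let _d = d_a.lock().unwrap();             // ...then blocks on d if B already holds it.
    });

    let (c_b, d_b) = (Arc::clone(&c), Arc::clone(&d));
    let b = thread::spawn(move || {
        let _d = d_b.lock().unwrap();             // B grabs lock d...
        thread::sleep(Duration::from_millis(50));
        let _c = c_b.lock().unwrap();             // ...then blocks on c, which A already holds.
    });

    // With the sleeps above, the two threads almost always end up waiting
    // on each other forever, so these joins never return.
    a.join().unwrap();
    b.join().unwrap();
}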

If we had a total ordering over all locks in a program, then threads could choose to sort the locks by the given keys, and always acquire them in the same order. This is the standard solution given in http://www.informit.com/articles/article.aspx?p=1626979&seqNum=4, https://stackoverflow.com/questions/1951275/would-you-explain-lock-ordering, and a whole bunch of other resources that I’m not going to list out here.
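
The simplest form of that fix can be sketched as a small helper (the name lock_both is hypothetical, not an existing API) that always acquires two mutexes in address order, so the scenario above cannot happen no matter which order callers pass the locks in:

use std::sync::{Mutex, MutexGuard};

// Hypothetical helper: lock two distinct mutexes in address order, so every
// caller acquires them in the same global order. The guards are returned
// matched to the argument order for readability.
fn lock_both<'a, T>(
    x: &'a Mutex<T>,
    y: &'a Mutex<T>,
) -> (MutexGuard<'a, T>, MutexGuard<'a, T>) {
    assert!(!std::ptr::eq(x, y), "lock_both needs two distinct mutexes");
    if (x as *const Mutex<T> as usize) < (y as *const Mutex<T> as usize) {
        let gx = x.lock().unwrap();
        let gy = y.lock().unwrap();
        (gx, gy)
    } else {
        let gy = y.lock().unwrap();
        let gx = x.lock().unwrap();
        (gx, gy)
    }
}

fn main() {
    let c = Mutex::new(0);
    let d = Mutex::new(0);

    // Both call sites acquire c and d in the same (address) order,
    // regardless of the argument order.
    {
        let (_c_guard, _d_guard) = lock_both(&c, &d);
    }
    {
        let (_d_guard, _c_guard) = lock_both(&d, &c);
    }
}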

So my thought is that all mutex-like objects adopt the following trait:

pub trait MutexOrder {
    fn mutex_key(&self) -> usize;
}

The way I envision this, the key is the mutex's location in memory. AFAIK, all of the platforms that Rust targets use flat address spaces, so the memory location can be treated as unique to that particular mutex instance; comparing two keys then gives you a total ordering (and also tells you whether two mutexes are actually the same mutex). If this isn't true, please correct me.
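
As a sketch of what that could look like for std's Mutex (assuming MutexOrder is a trait local to your crate, so coherence permits the blanket impl; the address-as-key choice is the one described above):

use std::sync::Mutex;

// The trait as proposed above.
pub trait MutexOrder {
    fn mutex_key(&self) -> usize;
}

impl<T> MutexOrder for Mutex<T> {
    fn mutex_key(&self) -> usize {
        // The mutex's own address: unique per live instance, so comparing
        // keys gives a total order, and equal keys mean the same mutex.
        self as *const Mutex<T> as usize
    }
}

fn main() {
    let a = Mutex::new(0);
    let b = Mutex::new(1);
    assert_ne!(a.mutex_key(), b.mutex_key());
    assert_eq!(a.mutex_key(), a.mutex_key());
}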

Assuming that all mutex-like objects adopted that particular trait, we could do something like the following (super-simplified code that won't compile as written; the real thing would take up too much space on this forum):

use std::sync::{Mutex, RwLock};
use parking_lot::ReentrantMutex;
use std::thread::spawn;

fn main() {
    // Real code would take up too much space, use your imagination
    // for how this would really be done.
    let m1 = Mutex::new(0);
    let m2 = RwLock::new(1);
    let m3 = ReentrantMutex::new(3);

    let jh = spawn(move || {
        // The following won't work as they are different types.  Pretend
        // I did this right with trait objects, or used a heterogeneous
        // collections crate, and that they are wrapped in Arc, etc.
        let mut my_mutexes = vec![m1, m2, m3];
        my_mutexes.sort_by_key(|k| k.mutex_key());
        let mut a = my_mutexes[0].lock();
        let mut b = my_mutexes[1].lock();
        let mut c = my_mutexes[2].lock();
        
        // Do real work, knowing that all threads always acquire the locks
        // in the same order, regardless of how they got hold of the
        // locks.
    });

    jh.join().unwrap();
}

The main downside I see is that downcasting the trait objects back to their concrete types would require implementing std::any::Any on each type. There may be additional issues as well that I haven't thought of. That said, I think most of them could be solved given some effort. It may even be possible to create a specialized macro that handles the sorting and locking of the mutexes in one go, hiding away any of the really difficult bits.
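
For illustration, here is a simplified version of the example above that does compile today, sidestepping the trait-object issue by sticking to a single lock type shared behind Arc (the i32 payloads, thread count, and reversal are arbitrary choices of mine):

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Three shared mutexes of the same type, so no trait objects are needed.
    let mutexes: Vec<Arc<Mutex<i32>>> = (0..3).map(|i| Arc::new(Mutex::new(i))).collect();

    let handles: Vec<_> = (0..2)
        .map(|i| {
            // Hand each thread the locks in a different arbitrary order.
            let mut my_mutexes: Vec<Arc<Mutex<i32>>> = mutexes.iter().cloned().collect();
            if i == 1 {
                my_mutexes.reverse();
            }
            thread::spawn(move || {
                // Sort by the address of the shared Mutex so every thread
                // acquires the locks in the same global order.
                my_mutexes.sort_by_key(|m| m.as_ref() as *const Mutex<i32> as usize);
                let _guards: Vec<_> = my_mutexes.iter().map(|m| m.lock().unwrap()).collect();
                // Do real work here while holding all of the locks.
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}

The proposed trait would essentially let you write that sort_by_key line generically across different lock types.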

So, thoughts about this?


I think it sounds like a great and useful idea, but better suited for a library than the stdlib, at least for now, especially given that there is a relatively straightforward path to implementation that anyone could take. Also, does using the memory location work if the type can move? I would think that it wouldn't matter, because any shared mutex shouldn't be movable.

You’re probably right that it should be tested in a separate library first, just to get the bugs ironed out. Maybe in parking_lot. I’ll open up an issue for this.

You can also just do this yourself in your own library:

use std::sync::Mutex;

fn sort_key<T>(m: &Mutex<T>) -> usize {
    // Cast the reference to a raw pointer first; `m as usize` alone won't compile.
    m as *const Mutex<T> as usize
}

OK, but mutexes are often shared; e.g. Arc<Mutex<_>>. Is there an easy way to get an identifier for the mutex here? I’ve tried the following and it didn’t work:

use std::sync::{Arc, Mutex};

#[inline]
fn pointer_as_usize<T>(p: *const T) -> usize {
    p as usize
}

fn main() {
    let my_mutex = Mutex::new(3);
    let my_mutex_ptr: *const Mutex<usize> = &my_mutex;
    let my_other_mutex_ptr = pointer_as_usize(&my_mutex);
    println!("my_mutex_ptr       = {:?}", my_mutex_ptr);
    println!("my_other_mutex_ptr = {:#x}", my_other_mutex_ptr);

    let mut shared_mutex = Arc::new(my_mutex);
    {
        let borrowed_ptr: *const Mutex<usize> = Arc::get_mut(&mut shared_mutex).unwrap();
        println!("borrowed_ptr = {:?}", borrowed_ptr);
    }

    let mut shared_mutex_2 = shared_mutex.clone();
    {
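        // Panics here: shared_mutex and shared_mutex_2 both point at the
        // same allocation, so Arc::get_mut() returns None and unwrap() fails.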
        let borrowed_ptr: *const Mutex<usize> = Arc::get_mut(&mut shared_mutex_2).unwrap();
        println!("borrowed_ptr = {:?}", borrowed_ptr);
    }
}

If you remove everything from shared_mutex_2 onward, it works, but otherwise it panics because Arc::get_mut() returns None when more than one Arc or Weak points to the same object.

Why are you using Arc::get_mut() anyway? Sure, you can’t get &mut, but you don’t need it.

You can use as_ref instead of get_mut.
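
For the record, a minimal sketch of that (the i32 payload and variable names are just for illustration): no &mut is needed, and every clone of the Arc yields the same key, since they all point at the same heap-allocated Mutex.

use std::sync::{Arc, Mutex};

fn main() {
    let shared = Arc::new(Mutex::new(3));
    let shared_2 = Arc::clone(&shared);

    // as_ref() gives a &Mutex<i32> straight out of the Arc, so the address
    // (and therefore the sort key) is the same for every clone.
    let key_1 = shared.as_ref() as *const Mutex<i32> as usize;
    let key_2 = shared_2.as_ref() as *const Mutex<i32> as usize;
    assert_eq!(key_1, key_2);
    println!("sort key = {:#x}", key_1);
}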

Whoops, you’re right! I kept making mistakes in my code that prevented it from compiling, so I thought it was only possible via Arc::get_mut() :pensive:

What @CAD97 said actually works better; I had tried doing the same thing as he suggested originally, but made enough syntax errors that I convinced myself that it couldn’t be done.

So, as long as a shared mutex stays at a stable address (which it does behind an Arc), there is no need for this API suggestion, as it is trivial to do it yourself. I humbly withdraw my suggestion.
