juliangmp 13 hours ago

I found that a lot of the problems I had been having with mutexes stem from the fact that traditionally the mutex and the data it protects are separate. Bolting them together, like Rust's Mutex<T> does, solves a lot of these problems. It lets you write normal, synchronous code and leave the locking up to the caller, but without making it a nightmare. You can't even access the data without locking the mutex.
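
For anyone who hasn't seen it, a minimal sketch of the idea (just a toy counter):

    use std::sync::Mutex;

    fn main() {
        // The lock owns the data: the only way to reach the u32
        // is through lock(), which hands back a guard.
        let counter = Mutex::new(0u32);
        {
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        } // guard dropped here, so the mutex unlocks automatically
        println!("{}", counter.lock().unwrap());
    }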

This isn't an attack on the (very well written) article though. Just wanted to add my two cents.

torginus 12 hours ago | parent | next [-]

Mutexes suffer from a host of problems, and imo are not a very good concurrency primitive - they were designed to turn single-threaded code into multi-threaded code. With today's 8+ cores in most systems, a single point of contention usually becomes a problem quickly.

They're liable to deadlocks/livelocks, and not only with other explicit Mutex-like things (it can happen that some library you use has a lock hidden deep inside).

They're also often backed by OS primitives (with big overheads) whose behavior is inconsistent between platforms (spinning, waiting, etc.). We ran into an issue with .NET where its version of Mutex didn't wake up the blocked thread on Linux as fast as on Windows, meaning we needed about 100x the time to serve a request because the thread was sleeping too long.

There are questions like when to spin and when to put the thread to sleep, which unfortunately the developer has to answer.

Not assigning blame here, just pointing out that threading primitives and behaviors don't translate perfectly between OSes.

Multi-threading is hard; other solutions, like queues, suffer from issues such as backpressure.

That's why I'm skeptical about Rust's "fearless concurrency" promise - none of these bugs are solved by just eliminating data races, which are a huge issue, but not the only one.

adwn 11 hours ago | parent [-]

Your view on mutex performance and overhead is outdated, at least for the major platforms: The Rust standard library mutex only requires 5 bytes, doesn't allocate, and only does a syscall on contention. The mutex implementation in the parking_lot library requires just 1 byte per mutex (and doesn't allocate and only does a syscall on contention). This enables very fine-grained, efficient locking and low contention.
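
The practical upshot is that per-item locks become cheap. A sketch with std only (one lock per slot, no allocation per mutex, and a syscall only if two threads collide on the same slot):

    use std::sync::Mutex;

    fn main() {
        // 1024 independently locked slots.
        let slots: Vec<Mutex<u64>> = (0..1024).map(|_| Mutex::new(0)).collect();

        std::thread::scope(|s| {
            for t in 0usize..8 {
                let slots = &slots;
                s.spawn(move || {
                    // Each thread takes every 8th slot, so contention is rare.
                    for i in (t..1024).step_by(8) {
                        *slots[i].lock().unwrap() += 1;
                    }
                });
            }
        });
    }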

torginus 10 hours ago | parent | next [-]

These are OS primitives I'm talking about - I haven't checked out the standard library version, but the parking_lot version uses a spinlock with a thread sleep when the wait times get too high - it has no way of getting notified when the mutex gets unblocked, nor does it support priority inheritance.

It seems it's optimized for high-performance, compute-heavy code with short critical sections.

These assumptions may let it win benchmarks, but don't cover the use cases of all users. To illustrate why this is bad, imagine you have a Mutex-protected resource that becomes available after 10µs on average. This lock spins 10 times checking if the resource has become available (likely <1µs), then yields the thread. The OS (let's assume Linux) won't wake the thread up until the next scheduler tick, and it's under no obligation to do so even then (and has no idea it should). But even best-case, you're left waiting 10ms, which is a typical scheduler tick.

In contrast, OS-based solutions are expensive but not that expensive - let's say they add 1µs to the wait. Then you would wait 11µs for the resource.

A method call taking 10ms and one taking 15 us is a factor of 60x, which can potentially kill your performance.

You as the user of the library are implicitly buying into these assumptions which may not hold for your case.

There's also nothing in Rust that protects you from deadlocks with 100% certainty. You can fuzz them out, and use helpers, but you can do that in any language.

So you do need to be mindful of how your mutex works, if you want to build a system as good as the one it replaces.

galangalalgol 9 hours ago | parent | next [-]

The best practices I adopt for Rust avoid the use of mutex whenever possible, precisely because of how easy a deadlock is. It turns out it is always possible. There are entire languages that disallow any mutable state, much less shared mutable state. The question becomes how much performance you are willing to sacrifice to avoid the mutex. By starting with no shared mutable state and adding it when something is too slow, you end up with very few mutexes.

adwn 9 hours ago | parent [-]

> avoid the use of mutex […] It turns out it is always possible

How would you handle the archetypical example of a money transfer between two bank accounts, in which 100 units of money need to be subtracted from one account and atomically added to another account, after checking that the first account contains at least 100 units?
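
(For comparison, the straightforward mutex version - a sketch with a hypothetical Account type - has to lock both accounts, in a consistent order so that two opposing transfers can't deadlock:)

    use std::sync::Mutex;

    struct Account { balance: Mutex<i64> }

    fn transfer(from: &Account, to: &Account, amount: i64) -> bool {
        if std::ptr::eq(from, to) {
            return *from.balance.lock().unwrap() >= amount;
        }
        // Lock in address order so transfer(a, b) and transfer(b, a)
        // can't deadlock each other.
        let (first, second) = if (from as *const Account) < (to as *const Account) {
            (from, to)
        } else {
            (to, from)
        };
        let mut first_bal = first.balance.lock().unwrap();
        let mut second_bal = second.balance.lock().unwrap();
        let (from_bal, to_bal) = if std::ptr::eq(first, from) {
            (&mut *first_bal, &mut *second_bal)
        } else {
            (&mut *second_bal, &mut *first_bal)
        };
        if *from_bal >= amount {
            *from_bal -= amount;
            *to_bal += amount;
            true
        } else {
            false
        }
    }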

vrmiguel an hour ago | parent | next [-]

Since the thread mentions Rust: in Rust, you often replace Mutexes with channels.

In your case, you could have a channel where the Receiver is the only part of the code that transfers anything. It'd receive a message Transfer { from: Account, to: Account, amount: Amount } and do the required work. Any other threads would therefore only have copies of the Sender handle. Concurrent sends would be serialized through the queue's buffering.
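
Roughly, with toy types (a sketch, not production code):

    use std::sync::mpsc;
    use std::thread;

    type AccountId = usize;
    struct Transfer { from: AccountId, to: AccountId, amount: i64 }

    fn main() {
        let (tx, rx) = mpsc::channel::<Transfer>();

        // The receiving thread is the only code that touches balances,
        // so it needs no lock: concurrent sends serialize in the queue.
        let ledger = thread::spawn(move || {
            let mut balances = vec![100i64; 4];
            for t in rx {
                if balances[t.from] >= t.amount {
                    balances[t.from] -= t.amount;
                    balances[t.to] += t.amount;
                }
            }
            balances
        });

        // Any number of threads can hold a Sender clone.
        let tx2 = tx.clone();
        thread::spawn(move || tx2.send(Transfer { from: 0, to: 1, amount: 100 }).unwrap());
        tx.send(Transfer { from: 2, to: 3, amount: 50 }).unwrap();
        drop(tx); // once every Sender is gone, the receive loop ends

        println!("final balances: {:?}", ledger.join().unwrap());
    }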

I'm not suggesting this is an ideal way of doing it.

galangalalgol 8 hours ago | parent | prev [-]

The simplest pure functional way would be to copy the whole database, instantiating a new copy with the desired change if the condition is met. That obviously doesn't scale, which is where the performance thing comes in. A still-pure way would be to use a persistent tree or hash array mapped trie that allows efficient reuse of the original db. There are times a purely functional approach doesn't perform well enough, but even with large-scale entity-component-type systems in both Rust and C++, the number of times I've had to use a mutex to be performant is small. Atomics are much more common, but still not common. Persistent data structures alleviate most of the need.
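
For example, with a persistent map (a sketch assuming the im crate, whose maps are HAMTs with cheap structure-sharing clones):

    use im::HashMap;

    // Pure transfer: returns a new ledger if the funds are there,
    // sharing almost all structure with the old version.
    fn transfer(
        ledger: &HashMap<&'static str, i64>,
        from: &'static str,
        to: &'static str,
        amount: i64,
    ) -> Option<HashMap<&'static str, i64>> {
        let from_bal = *ledger.get(from)?;
        if from_bal < amount {
            return None;
        }
        let to_bal = ledger.get(to).copied().unwrap_or(0);
        Some(ledger.update(from, from_bal - amount).update(to, to_bal + amount))
    }

    fn main() {
        let v1 = HashMap::new().update("alice", 150).update("bob", 0);
        let v2 = transfer(&v1, "alice", "bob", 100).unwrap();
        // v1 is untouched: concurrent readers keep a consistent snapshot.
        assert_eq!(*v1.get("alice").unwrap(), 150);
        assert_eq!(*v2.get("alice").unwrap(), 50);
    }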

pas 4 hours ago | parent [-]

Pure or not, eventually this comes down to durability, no?

And the way to do it is to either have some kind of single point of control (a designated actor or a single-threaded executor) or mark the data (i.e. use some concurrency-control primitive, either wrapping the data or in some dedicated place where the executors check [like the JVM's safepoints]).

Using consistent hashing, these hypothetical accounts could be allocated to actors, and then each transaction is managed by the actor of the source (i.e. where the money is sent from and where the check needs to happen), each with its own durable WAL, and periodically these are aggregated.

(Of course, then the locking is hidden in the maintenance of the hashring as dining philosophers are added/removed.)

kragen 24 minutes ago | parent [-]

Eliminating the durability constraint doesn't make it any easier to program, just easier to get good performance on.

Distributing accounts among different actors, without two-phase commit or its moral equivalent, enables check kiting.

Someone 9 hours ago | parent | prev | next [-]

> A method call taking 10ms and one taking 15 us is a factor of 60x

667 (a thousand 15μs calls take 15ms)

adwn 9 hours ago | parent | prev [-]

> […] but don't cover the use cases of all users.

No single concurrency primitive covers all use cases. I was addressing your misconceptions about mutex performance and overhead, not whether mutexes are the best solution to your particular problem.

> […] it has no way of getting notified when the mutex gets unblocked […] The OS (let's assume Linux) won't wake the thread up until the next scheduler tick, and it's under no obligation to do so even then (and has no idea it should).

You've misunderstood the parking_lot implementation. When thread B tries to lock a mutex that's currently locked by thread A, then, after spinning a few cycles, thread B "parks" itself, i.e., it asks the kernel to remove it from the Runnable task queue. On Linux, this is done using the futex syscall. When thread A unlocks the mutex, it detects that another thread is waiting on that mutex. Thread A takes one thread from the queue of waiting threads and "unparks" it, i.e., it asks the kernel to move it into the Runnable task queue. The kernel is notified immediately, and if there's a free CPU core available, will tend to dispatch the thread to that core. On a non-realtime OS, there's no guarantee how long it takes for an unblocked thread to be scheduled again, but that's the case for all concurrency primitives.
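
A toy version of that handshake, using std's park/unpark instead of raw futexes (just the shape of the protocol, not parking_lot's actual code):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::thread;
    use std::time::Duration;

    fn main() {
        let locked = Arc::new(AtomicBool::new(true)); // thread A holds the "lock"
        let locked2 = Arc::clone(&locked);

        let waiter = thread::spawn(move || {
            // Spin briefly, hoping the lock frees up quickly...
            for _ in 0..100 {
                if !locked2.load(Ordering::Acquire) {
                    return; // got it without sleeping
                }
                std::hint::spin_loop();
            }
            // ...then park: the kernel takes us off the runnable queue.
            while locked2.load(Ordering::Acquire) {
                thread::park(); // loop: park() may wake spuriously
            }
        });

        thread::sleep(Duration::from_millis(1)); // thread A's critical section
        locked.store(false, Ordering::Release);  // unlock...
        waiter.thread().unpark();                // ...and wake the waiter immediately
        waiter.join().unwrap();
    }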

ahoka 7 hours ago | parent | prev | next [-]

It's called a futex, and it has been supported by both Linux and Windows for ages.

adwn 7 hours ago | parent [-]

The 1-byte-per-mutex parking_lot implementation works even on systems that don't provide a futex syscall or equivalent.

magicalhippo 10 hours ago | parent | prev [-]

How does it avoid cache contention with just a few bytes per mutex? That is, multiple mutex instances sharing a cache line. Say I have a structure with multiple int32 counters, each protected by its own mutex.

torginus 10 hours ago | parent | next [-]

By not avoiding it. And a year later you get to write a blog post about how you discovered and fixed this phenomenon hitherto unknown to computer science.

adwn 10 hours ago | parent | prev | next [-]

Cache contention is (mostly) orthogonal to your locking strategy. If anything, fine-grained locking has the potential to improve cache contention, because

1) the mutex byte/word is more likely to be in the same cache line as the data you want to access anyway, and

2) different threads are more likely to write to mutex bytes/words in different cache lines, whereas in coarse-grained locking, different threads will fight for exclusive access over the cache line containing that one, global mutex.

@magicalhippo: Since I'm comment-rate-throttled, here's my answer to your question:

Typically, you'd artificially increase the size and alignment of the structure:

    #[repr(align(64))]
    struct Status {
        counter: Mutex<u32>,
    }

This struct now has an alignment of 64 and is also 64 bytes in size (instead of just the 4+1 required for Mutex<u32>), which guarantees that it's alone in its cache line. This is wasteful from a memory perspective, but can be worth it from a performance perspective. As so often with optimization, whether this makes your program faster or slower depends heavily on the specific case.

magicalhippo 9 hours ago | parent [-]

> different threads are more likely to write to mutex bytes/words in different cache lines

If you've got small objects and sequential allocation, that's not a given in my experience.

Like in my example, the ints could be allocated one per thread to indicate some per-thread status, and the main UI thread wants to read them every now and then, hence they're protected by a mutex.

If they're allocated sequentially, the mutexes end up sharing cache lines and hence lead to effective contention, even though there's almost no "actual" contention.

Yes yes, for a single int you might want to use an atomic variable, but this is just for demonstration purposes. I've seen this play out in real code several times, where instead of ints it was a couple of pointers, say.

I don't know Rust though, so just curious.

gpderetta 9 hours ago | parent [-]

The issue might be allocating the ints contiguously in the first place. No language magic is going to help you avoid thinking about mechanical sympathy.

And allocating the ints contiguously might actually be the right solution, if the cost of sporadic false sharing is less than the cost of wasting memory.

There's no silver bullet.

magicalhippo 9 hours ago | parent [-]

But the mutex encapsulates the int, so if the mutex ensured it occupied a multiple of cache lines, there would be no contention. At the very small cost of a few bytes of memory.

gpderetta 9 hours ago | parent [-]

The mutex forcing alignment would be extremely wasteful. FWIW, I have used 1-bit spin locks.
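
Along those lines, a toy 1-bit spinlock in Rust (bit 31 of a shared word is the lock, the other 31 bits stay free for data; no parking, no fairness):

    use std::sync::atomic::{AtomicU32, Ordering};

    const LOCK_BIT: u32 = 1 << 31;

    struct BitLock(AtomicU32);

    impl BitLock {
        fn lock(&self) {
            // fetch_or returns the previous value; if the bit was
            // already set, someone else holds the lock, so spin.
            while self.0.fetch_or(LOCK_BIT, Ordering::Acquire) & LOCK_BIT != 0 {
                std::hint::spin_loop();
            }
        }

        fn unlock(&self) {
            self.0.fetch_and(!LOCK_BIT, Ordering::Release);
        }
    }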

kragen 8 hours ago | parent | prev | next [-]

Traditionally traditionally, monitors were declared together with the data they contained, and the compiler enforced that the data was not accessed outside the monitor. Per Brinch Hansen wrote a rather bitter broadside against Java's concurrency model when it came out.

csb6 4 hours ago | parent [-]

Was this the article?

http://brinch-hansen.net/papers/1999b.pdf

kragen 4 hours ago | parent [-]

This is a toned-down, but still scathing, version of what I remember reading.

Nauxuron 13 hours ago | parent | prev | next [-]

> You can't even access the data without locking the mutex.

It's even nicer than that: you can actually access the data without locking the mutex, because while you hold a mutable borrow of the mutex, Rust statically guarantees that no one else can acquire a lock on it.

https://doc.rust-lang.org/std/sync/struct.Mutex.html#method....

jstimpfle 12 hours ago | parent [-]

Given a data item of a non-thread-safe type (i.e. not Mutex<T> etc.), the borrow checker checks that there's only ever one mutable reference to it. This doesn't solve concurrency, as it prevents multiple threads from even having the ability to access that data.

Mutex is for where you have that ability, and ensures at runtime that accesses get serialized.

dwattttt 11 hours ago | parent [-]

The maybe unexpected point is that if you know you're the only one who has a reference to a Mutex (i.e. you have a &mut), you don't need to bother locking it; if no one else knows about the Mutex, there's no one else who could lock it. It comes up when you're setting things up and haven't shared the Mutex yet.

This means no atomic operations or syscalls or what have you.

jstimpfle 11 hours ago | parent [-]

Do you have an example? I don't program in Rust, but I imagine I'd rarely get into that situation. Either my variable is a local (in a function), in which case I can tell pretty easily whether I'm the only one accessing it. Or the data is linked globally in a data structure, and the only way to access it safely is by knowing exactly what you're doing and what the other threads are doing. How is Rust going to help here? I imagine it's only making the optimal thing harder to achieve.

I can see that there are some cases where you have heap-data that is only visible in the current thread, and the borrow checker might be able to see that. But I can imagine that there are at least as many cases where it would only get in the way and probably nudge me towards unnecessary ceremony, including run-time overhead.

dwattttt an hour ago | parent | next [-]

It's relevant when you have more complex objects, such as ones that contain independent mutexes that lock different sections of data.

You want the object to present its valid operations, but the object could be constructed in single-threaded or multithreaded situations.

So you'd offer two APIs: one which requires a shared reference and internally locks, and a second which requires a mutable reference but does no locking.

Internally the shared reference API would just lock the required mutexes, then forward to the mutable reference API.
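
A sketch of that shape, with a hypothetical Stats type:

    use std::sync::Mutex;

    struct Stats {
        hits: Mutex<u64>,
        misses: Mutex<u64>,
    }

    impl Stats {
        // Shared-reference API: callers may be on other threads, so lock.
        fn record_hit(&self) {
            *self.hits.lock().unwrap() += 1;
        }

        // Exclusive-reference API: &mut Stats proves no other thread can
        // even see the mutexes, so get_mut() skips locking entirely.
        fn record_hit_mut(&mut self) {
            *self.hits.get_mut().unwrap() += 1;
        }
    }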

adwn 11 hours ago | parent | prev | next [-]

When you construct an object containing a mutex, you have exclusive access to it, so you can initialize it without locking the mutex. When you're done, you publish/share the object, thereby losing exclusive access.

    struct Entry {
        msg: Mutex<String>,
    }
    ...
    // Construct a new object on the stack:
    let mut object = Entry { msg: Mutex::new(String::new()) };
    // Exclusive access, so no locking needed here:
    let mutable_msg = object.msg.get_mut().unwrap();
    format_message(mutable_msg, ...);
    ...
    // Publish the object by moving it somewhere else, possibly on the heap:
    global_data.add_entry(object);
    // From now on, accessing the msg field would require locking the mutex
jstimpfle 7 hours ago | parent [-]

Initialization is always special. A mutex can't protect that which doesn't exist yet. The right way to initialize your object would be to construct the message first, then construct the composite type that combines the message with a mutex. This doesn't require locking a mutex, even without any borrow checker or other cleverness.

adwn 7 hours ago | parent [-]

Dude, it's a simplified example, of course you can poke holes in it. Here, let me help you fill in the gaps:

    let mut object = prepare_generic_entry(general_settings);
    let mutable_msg = object.msg.get_mut().unwrap();
    do_specific_message_modification(mutable_msg, special_settings);

The point is that there are situations where you have exclusive access to a mutex, and in those situations you can safely access the protected data without having to lock it.

jstimpfle 7 hours ago | parent [-]

Sorry, I don't find that convincing but rather contrived. This still seems like "constructor"-type code, so the final object is not ready and locking should not happen before all the protected fields are constructed.

There may be other situations where you have an object in a specific state that makes it effectively owned by a thread, which might make it possible to forgo locking it. These are all very ad-hoc situations, most of them would surely be very hard to model using the borrow checker, and avoiding a lock would most likely not be worth the hassle anyway.

Not sure how this can help me reduce complexity or improve performance of my software.

imtringued 10 hours ago | parent | prev [-]

>I don't program in Rust, but I imagine I'd rarely get into that situation.

Are you sure? Isn't having data be local to a thread the most common situation, with data sharing being the exception?

>Or, the data is linked globally in a data structure and the only way to access it safely is by knowing exactly what you're doing and what the other threads are doing.

That's exactly what the borrow checker does. It tracks how many mutable references you have to your data structure at compile time. This means you can be sure what is local and what is shared.

Meanwhile, without the borrow checker you always have to assume there is a remote probability that your mental model is wrong and that everything goes wrong anyway. That's mentally exhausting. If something goes wrong, it is better to only have to check the places where you know things can go wrong, rather than the entire code base.

jstimpfle 8 hours ago | parent [-]

I use lots of locals, but only to make my code very "local", i.e. fine-grained, editable, and clear, using lots of temporary variables. No complicated expressions. That's all immutable data (after initialization). I rarely take the address of such data but make lots of copies. If I take its address, it's as an immutable pointer, maybe not in the type system but at least in spirit.

I keep very little state on the stack -- mostly implicit stuff like mutex lock / mutex unlock. By "state" I mean object type things that get mutated or that need cleanup. I always have a "database schema" of my global state in mind. I define lots of explicit struct types instead of hiding state as locals in functions. I've found this approach of minimizing local state to be the right pattern because it enables composability. I'm now free to factor functionality into separate functions. I can much more freely change and improve control flow. With this approach it's quite rare that I produce bugs while refactoring.

So yes, I have lots of locals but I share basically none of them with other threads. Also, I avoid writing any code that blocks on other threads (other than maybe locking a mutex), so there's another reason why I would not intentionally share a local with another thread. Anything that will be shared with another thread should be allocated on the heap just for the reason that we want to avoid blocking on other threads.

In that sense, the borrow checker is a tool that would allow me to write code more easily that I never wanted written in the first place.

mgaunard 13 hours ago | parent | prev | next [-]

I find it better to model that as an Actor than a mutex, but I guess it's inherently the same thing, except the actor also allows asynchronous operations.

gpderetta 12 hours ago | parent [-]

You can go full circle and also make operations on a mutex asynchronous. Hence the realization that message passing and shared memory are truly dual.

mgaunard 11 hours ago | parent [-]

The very idea of a mutex is that it is synchronous. You wait until you can acquire the mutex.

If it's asynchronous, it's not a mutex anymore, or it's just used to synchronously setup some other asynchronous mechanism.

gpderetta 10 hours ago | parent [-]

A mutex is a way to guarantee mutual exclusion, nothing more, nothing less. You can recover synchronous behaviour if you really want:

    synchronized<Something> something;
    ...
    co_await something.async_visit([&](Something& x) {
        /* critical section here */ 
    });
mgaunard 10 hours ago | parent [-]

That isn't a mutex; that's delegating work asynchronously and arranging for something else to run when it is complete (the continuation implicitly defined through coroutines).

In systems programming parlance, a mutex is a resource which can be acquired and released, is held by at most one owner at a time, and blocks on acquire if already acquired.

gpderetta 10 hours ago | parent [-]

Do a CPS transform of your typical std::mutex critical section and you'll find they are exactly the same.

mgaunard 9 hours ago | parent [-]

They're not; the interactions with the memory model are different, as are the guarantees.

CPS shouldn't be able to deadlock for example?

gpderetta 9 hours ago | parent [-]

CPS can trivially deadlock for all meaningful definitions of deadlock.

Would you consider this a mutex?

   async_mutex mux;

   co_await mux.lock();
   /* critical section */
   co_await mux.unlock();
   
What about:

    my_mutex mux;
    {
       std::lock_guard _{mux};
       /* critical section */
    }
where the code runs in a user space fiber.

Would you consider boost synchronized a mutex?

Don't confuse the semantics with the implementation details (yes async/await leaks implementation details).

indigo945 12 hours ago | parent | prev | next [-]

This doesn't solve the deadlock problem, however.

dist-epoch 13 hours ago | parent | prev [-]

Sounds like the Java synchronized class.

masklinn 8 hours ago | parent | next [-]

No. It’s not a property of the type, so you can have multiple items under a single mutex and you're not at the mercy of whoever wrote the class; it works fine with POD types; it does not force a lock/unlock on each method call (instead the compiler essentially ensures you hold the lock before you can access the data); and the borrow checker ensures you can't leak any sort of sub-state, even though you can call all sorts of helpers which have no requirement to be aware of the locking.
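
E.g. one lock over several plain fields, with lock-agnostic helpers:

    use std::sync::Mutex;

    // Two plain u64s under one lock; the guard is the only path to them.
    struct Counters { hits: u64, misses: u64 }

    fn bump(c: &mut Counters) { c.hits += 1; } // knows nothing about locks

    fn main() {
        let stats = Mutex::new(Counters { hits: 0, misses: 0 });
        let mut guard = stats.lock().unwrap();
        bump(&mut guard);  // the guard derefs to &mut Counters
        guard.misses += 1; // direct field access, still under the lock
    }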

It’s what synchronized classes wish they had been, maybe.

the_gipsy 10 hours ago | parent | prev [-]

Not at all. With Rust you cannot accidentally leak a reference, and here's the killer: it guarantees these properties at compile time.