torginus 10 hours ago

These are OS primitives I'm talking about - I haven't checked out the standard library version, but the parking_lot version uses a spinlock with a thread sleep when wait times get too high - it has no way of getting notified when the mutex gets unlocked, nor does it handle priority inversion.

It seems it's optimized for scenarios with high-performance, compute-heavy code and short critical sections.

These assumptions may let it win benchmarks, but don't cover the use cases of all users. To illustrate why this is bad, imagine you have a mutex-protected resource that becomes available after 10 µs on average. This lock spins 10 times checking whether it has become available (likely <1 µs), then yields the thread. The OS (let's assume Linux) won't wake the thread until the next scheduler tick, and it's under no obligation to do so even then (and has no idea it should). But even best-case, you're left waiting 10 ms, which is a typical scheduler tick.
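A sketch of that spin-then-yield pattern (purely illustrative - not parking_lot's actual code, and a reply below disputes that this is how it behaves):

    // Spin a bounded number of times, then yield the core. Once we yield,
    // nothing tells the scheduler to run us again when the lock frees up.
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::thread;

    fn spin_then_yield_lock(flag: &AtomicBool) {
        loop {
            for _ in 0..10 {
                // Try to grab the lock; done if it was free.
                if flag
                    .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                    .is_ok()
                {
                    return;
                }
                std::hint::spin_loop();
            }
            // Give the core back; when we run again is up to the scheduler.
            thread::yield_now();
        }
    }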

In contrast, OS-based solutions are expensive but not that expensive - let's say they add 1 µs to the wait. Then you'd wait 11 µs for the resource.

A method call taking 10 ms and one taking 15 µs differ by a factor of 60x, which can potentially kill your performance.

You as the user of the library are implicitly buying into these assumptions which may not hold for your case.

There's also nothing in Rust that protects you from deadlocks with 100% certainty. You can fuzz them out and use helpers, but you can do that in any language.

So you do need to be mindful of how your mutex works if you want to build a system as good as the one it replaces.

galangalalgol 9 hours ago | parent | next

The best practices I adopt for Rust avoid the use of mutexes whenever possible, precisely because of how easy a deadlock is. It turns out it is always possible. There are entire languages that disallow any mutable state, much less shared mutable state. The question becomes how much performance you're willing to sacrifice to avoid the mutex. By starting with no shared mutable state and adding it when something is too slow, you end up with very few mutexes.

adwn 9 hours ago | parent

> avoid the use of mutexes […] It turns out it is always possible

How would you handle the archetypal example of a money transfer between two bank accounts, in which 100 units of money need to be subtracted from one account and atomically added to the other, after checking that the first account contains at least 100 units?

vrmiguel an hour ago | parent | next

Since the thread mentions Rust: in Rust, you often replace mutexes with channels.

In your case, you could have a channel where the Receiver is the only part of the code that transfers anything. It'd receive a message Transfer { from: Account, to: Account, amount: Amount } and do the required work. Any other threads would only hold clones of the Sender handle, so concurrent sends would be serialized through the channel's buffering.
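A minimal sketch with std::sync::mpsc (the Transfer struct, String account type, and single ledger thread are illustrative choices, not a fixed API):

    use std::collections::HashMap;
    use std::sync::mpsc;
    use std::thread;

    type Account = String;
    type Amount = u64;

    struct Transfer { from: Account, to: Account, amount: Amount }

    fn main() {
        let (tx, rx) = mpsc::channel::<Transfer>();

        // The receiver is the only code that touches balances, so no mutex
        // is needed: concurrent sends are serialized by the channel.
        let ledger = thread::spawn(move || {
            let mut balances: HashMap<Account, Amount> =
                HashMap::from([("alice".into(), 100), ("bob".into(), 0)]);
            for Transfer { from, to, amount } in rx {
                if balances.get(&from).copied().unwrap_or(0) >= amount {
                    *balances.get_mut(&from).unwrap() -= amount;
                    *balances.entry(to).or_insert(0) += amount;
                }
            }
            balances // channel closed: return the final state
        });

        // Any other thread just holds a clone of the Sender.
        let tx2 = tx.clone();
        thread::spawn(move || {
            tx2.send(Transfer { from: "alice".into(), to: "bob".into(), amount: 100 })
                .unwrap();
        })
        .join()
        .unwrap();

        drop(tx); // hang up so the ledger thread's loop ends
        println!("{:?}", ledger.join().unwrap());
    }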

I'm not suggesting this is an ideal way of doing it.

galangalalgol 8 hours ago | parent | prev

The simplest purely functional way would be to copy the whole database, instantiating a new copy with the desired change if the condition was met. That obviously doesn't scale, which is where the performance thing comes in. A still-pure way would be to use a persistent tree or hash array mapped trie that allows efficient reuse of the original db. There are times a purely functional approach doesn't perform well enough, but even with large-scale entity-component-style systems in both Rust and C++, the number of times I've had to use a mutex to be performant is small. Atomics are much more common, but still not common. Persistent data structures alleviate most of the need.
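A sketch of that naive copy-everything version (the Balances alias and account strings are made up for illustration; a persistent trie would share structure instead of cloning):

    use std::collections::HashMap;

    type Balances = HashMap<String, u64>;

    // Returns a new ledger with the transfer applied if the check passes,
    // otherwise an unchanged copy. Pure: the input is never mutated.
    fn transfer(db: &Balances, from: &str, to: &str, amount: u64) -> Balances {
        match db.get(from) {
            Some(&bal) if bal >= amount => {
                let mut next = db.clone(); // the full copy is the part a
                                           // persistent HAMT makes cheap
                *next.get_mut(from).unwrap() -= amount;
                *next.entry(to.to_string()).or_insert(0) += amount;
                next
            }
            _ => db.clone(),
        }
    }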

pas 4 hours ago | parent

Pure or not, eventually this comes down to durability, no?

And the way to do it is either to have some kind of single point of control (a designated actor or single-threaded executor) or to mark the data (i.e., use some concurrency-control primitive, either wrapping the data or in some dedicated place where the executors check [like the JVM's safepoints]).

Using consistent hashing, these hypothetical accounts could be allocated to actors, and then each transaction is managed by the actor of the source account (i.e., where the money is sent from, and where the check needs to happen), each with its own durable WAL; periodically these are aggregated (a sketch of the allocation follows below).

(Of course, the locking is then hidden in the maintenance of the hash ring as dining philosophers are added/removed.)
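A rough sketch of that account-to-actor allocation with a hash ring (DefaultHasher, the replica count, and the ring layout are placeholder choices):

    use std::collections::BTreeMap;
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn hash64<T: Hash>(t: &T) -> u64 {
        let mut h = DefaultHasher::new();
        t.hash(&mut h);
        h.finish()
    }

    // Hash point on the ring -> actor id. Each actor gets several points
    // so accounts spread evenly and rebalancing stays incremental.
    struct Ring { points: BTreeMap<u64, usize> }

    impl Ring {
        fn new(n_actors: usize, points_per_actor: usize) -> Self {
            let mut points = BTreeMap::new();
            for actor in 0..n_actors {
                for replica in 0..points_per_actor {
                    points.insert(hash64(&(actor, replica)), actor);
                }
            }
            Ring { points }
        }

        // The source account hashes to a point; the first ring point at or
        // after it (wrapping around) names the managing actor.
        fn actor_for(&self, account: &str) -> usize {
            let h = hash64(&account);
            self.points
                .range(h..)
                .next()
                .or_else(|| self.points.iter().next())
                .map(|(_, &actor)| actor)
                .expect("ring must not be empty")
        }
    }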

kragen 23 minutes ago | parent

Eliminating the durability constraint doesn't make it any easier to program, just easier to get good performance on.

Distributing accounts among different actors, without two-phase commit or its moral equivalent, enables check kiting.

Someone 9 hours ago | parent | prev | next

> A method call taking 10 ms and one taking 15 µs differ by a factor of 60x

667 (a thousand 15 µs calls take 15 ms)

adwn 9 hours ago | parent | prev

> […] but don't cover the use cases of all users.

No single concurrency primitive covers all use cases. I was addressing your misconceptions about mutex performance and overhead, not whether mutexes are the best solution to your particular problem.

> […] it has no way of getting notified when the mutex gets unlocked […] The OS (let's assume Linux) won't wake the thread until the next scheduler tick, and it's under no obligation to do so even then (and has no idea it should).

You've misunderstood the parking_lot implementation. When thread B tries to lock a mutex that's currently locked by thread A, then, after spinning for a few cycles, thread B "parks" itself, i.e., it asks the kernel to remove it from the runnable task queue. On Linux, this is done using the futex syscall. When thread A unlocks the mutex, it detects that another thread is waiting on that mutex, takes one thread from the queue of waiting threads, and "unparks" it, i.e., asks the kernel to move it back into the runnable task queue. The kernel is notified immediately and, if a free CPU core is available, will tend to dispatch the thread to that core. On a non-realtime OS there's no guarantee how long it takes for an unblocked thread to be scheduled again, but that's the case for all concurrency primitives.
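In sketch form, assuming Linux and the libc crate - this is the classic three-state futex mutex from Drepper's "Futexes Are Tricky", simplified, not parking_lot's actual code:

    use std::sync::atomic::{AtomicU32, Ordering};

    // Thin wrappers over the futex syscall (Linux-only).
    fn futex_wait(addr: &AtomicU32, expected: u32) {
        unsafe {
            libc::syscall(
                libc::SYS_futex,
                addr as *const AtomicU32,
                libc::FUTEX_WAIT,
                expected,
                std::ptr::null::<libc::timespec>(), // no timeout
            );
        }
    }

    fn futex_wake_one(addr: &AtomicU32) {
        unsafe {
            libc::syscall(libc::SYS_futex, addr as *const AtomicU32, libc::FUTEX_WAKE, 1);
        }
    }

    // State: 0 = unlocked, 1 = locked, 2 = locked with parked waiters.
    struct RawMutex { state: AtomicU32 }

    impl RawMutex {
        fn lock(&self) {
            // Fast path: uncontended acquire, no syscall at all.
            if self.state.compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed).is_ok() {
                return;
            }
            // Slow path: mark the lock contended, then sleep in the kernel.
            // No scheduler-tick polling: FUTEX_WAKE makes us runnable at once.
            while self.state.swap(2, Ordering::Acquire) != 0 {
                futex_wait(&self.state, 2);
            }
        }

        fn unlock(&self) {
            // If anyone was parked (state was 2), wake exactly one waiter.
            if self.state.swap(0, Ordering::Release) == 2 {
                futex_wake_one(&self.state);
            }
        }
    }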