▲ | PaulDavisThe1st 32 minutes ago
As someone else noted, it is lock contention that doesn't scale, not mutable shared state. Lock-free data structures, patterns like RCU: in many cases these will scale entirely appropriately to the case at hand. A lot of situations that require high-scale mutable shared state have an inherent asymmetry in data usage (e.g. one consumer, many writers; or many consumers, one writer) that nearly always allows a better pattern than "wrap it in a mutex".
▲ | loeg 15 minutes ago

No, it's the mutable shared state that is the problem. Lock contention is just downstream of the same problems as any other mutable shared state.

> patterns like RCU

RCU isn't mutable shared state! It's sharing immutable state! That's the whole paradigm.
|
|
▲ | cogman10 3 hours ago
It's lock contention that slows things down more than anything, but it's really an "it depends" situation. The fastest algorithms smartly divide up the shared data being operated on in a way that avoids contention: for example, when working on a matrix, dividing that matrix into tiles that are processed concurrently.
▲ | loeg an hour ago

> It's lock contention that slows things down more than anything.

It's all flavors of the same thing. Lock contention is slow because sharing mutable state between cores is slow. It's all ~MOESI.

> The fastest algorithms smartly divide up the shared data being operated on in a way that avoids contention. For example, when working on a matrix, dividing that matrix into tiles that are processed concurrently.

Yes. Aka shared nothing, or read-only shared state.
|