loeg 3 hours ago
Idk, it's a general rule of thumb that the more mutable shared state an algorithm has, the worse it scales. So if you're trying to scale something to be concurrent, mutable shared state is an antipattern. | ||||||||
PaulDavisThe1st 32 minutes ago | parent
As someone else noted, it is lock contention that doesn't scale, not mutable shared state. Lock-free data structures, patterns like RCU ... in many cases these will scale entirely appropriately to the case at hand. A lot of situations that require high-scale mutable shared state have an inherent asymmetry in the data usage (e.g. many writers, one consumer; or one writer, many consumers) that nearly always allows a better pattern than "wrap it in a mutex".
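One way to picture the "one writer, many readers" case is a copy-on-update snapshot swap, loosely in the spirit of RCU (not real kernel RCU). This is a minimal sketch; `SnapshotMap` is a hypothetical name, and it leans on the CPython detail that rebinding an attribute is atomic under the GIL, so readers never take a lock:

```python
import threading

class SnapshotMap:
    """Single-writer-friendly map: readers take no lock at all.

    A writer copies the current dict, mutates the copy, and rebinds
    self._snapshot. Rebinding a reference is atomic in CPython, so a
    reader always sees either the complete old snapshot or the complete
    new one (an RCU-like copy-on-update scheme, not kernel RCU).
    """

    def __init__(self):
        self._snapshot = {}
        self._write_lock = threading.Lock()  # serializes writers only

    def get(self, key, default=None):
        # Lock-free read path: grab whatever snapshot is current.
        return self._snapshot.get(key, default)

    def put(self, key, value):
        with self._write_lock:            # contention only among writers
            updated = dict(self._snapshot)
            updated[key] = value
            self._snapshot = updated      # atomic reference swap
```

Readers never contend with each other, and the mutex only matters on the (rare, in this workload) write path, which is the asymmetry being exploited.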
| ||||||||
cogman10 3 hours ago | parent
It's lock contention that slows things down more than anything, but it's really an "it depends" situation. The fastest algorithms smartly divide up the shared data in a way that avoids contention. For example, when working on a matrix, divide it into tiles that are processed concurrently.
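A minimal sketch of that tiling idea, assuming a plain list-of-lists matrix: each task owns one tile, so there is no shared mutable state and no locking; the only "shared" step is combining the per-tile partial results. (In pure CPython the GIL limits actual parallel speedup; the point here is the contention-free partitioning, which carries over to processes or native threads.)

```python
from concurrent.futures import ThreadPoolExecutor

def sum_tile(matrix, r0, r1, c0, c1):
    # Each worker reads only its own tile: no shared mutable
    # state, hence no locks and no contention.
    return sum(matrix[r][c] for r in range(r0, r1) for c in range(c0, c1))

def tiled_sum(matrix, tile=2):
    """Sum a matrix by splitting it into tile x tile blocks,
    summing each block concurrently, then combining the results."""
    rows, cols = len(matrix), len(matrix[0])
    with ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(sum_tile, matrix,
                        r, min(r + tile, rows),
                        c, min(c + tile, cols))
            for r in range(0, rows, tile)
            for c in range(0, cols, tile)
        ]
        # The reduction over partial sums is the only point where
        # results from different tiles meet.
        return sum(f.result() for f in futures)
```

The same decomposition works for writes too, as long as each tile is written by exactly one worker.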
| ||||||||