alexjplant 21 hours ago
Ah yes, and every consumer should just do this in a while (true) loop as producers write to it. Very efficient and simple, with no possibility of lock contention or hot spots. Genius, really.
CharlieDigital 20 hours ago
I've implemented a distributed worker system on top of this paradigm. I used ZMQ to connect the nodes: the worker nodes would connect to an indexer/coordinator node that effectively did a `SELECT FROM ORDER BY ASC`. It's easier than you might think, and the relevant bits ended up at probably < 1000 SLOC all told.
Effectively a distributed event loop with the events queued up via a simple SQL query. Dead simple design, extremely robust, very high throughput, very easy to scale workers both horizontally (more nodes) and vertically (more threads). ZMQ made it easy to connect the remote threads to the centralized coordinator. It was effectively "self-balancing" because a worker would only re-queue its thread once it finished its current work. Very easy to manage, but it did not have hot failover since we kept the materialized, "2D" work queue in memory. Very rarely did we have issues with this, though.
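To make the shape of this concrete, here's a rough sketch of the pattern (not our actual code). It assumes Postgres with a hypothetical `jobs` table (`id`, `payload`, `status`, `created_at`), plus psycopg2 and pyzmq; workers pull work over REQ/REP only when idle, which is what makes it self-balancing.

    # coordinator.py / worker.py -- minimal sketch of a SQL-backed work queue behind ZMQ
    import time
    import psycopg2
    import zmq

    def coordinator(db_dsn: str, bind_addr: str = "tcp://*:5555") -> None:
        """Hand the oldest pending job to whichever worker asks next."""
        conn = psycopg2.connect(db_dsn)
        sock = zmq.Context().socket(zmq.REP)
        sock.bind(bind_addr)
        while True:
            sock.recv()  # worker says "I'm idle, give me work"
            with conn, conn.cursor() as cur:
                # SKIP LOCKED avoids lock contention if more than one coordinator
                # ever runs; ORDER BY created_at gives FIFO semantics.
                cur.execute(
                    """UPDATE jobs SET status = 'running'
                       WHERE id = (SELECT id FROM jobs
                                   WHERE status = 'pending'
                                   ORDER BY created_at ASC
                                   LIMIT 1
                                   FOR UPDATE SKIP LOCKED)
                       RETURNING id, payload"""
                )
                row = cur.fetchone()
            sock.send_json({"id": row[0], "payload": row[1]} if row else {})

    def process(job: dict) -> None:
        print("processing", job["id"])  # placeholder for the real work

    def worker(connect_addr: str = "tcp://localhost:5555") -> None:
        """Ask for a job, run it, repeat. One loop per worker thread."""
        sock = zmq.Context().socket(zmq.REQ)
        sock.connect(connect_addr)
        while True:
            sock.send(b"ready")
            job = sock.recv_json()
            if job:
                process(job)
            else:
                time.sleep(0.5)  # queue empty; back off briefly before asking again

In our setup the "loop" only hit the database from the coordinator, so the workers themselves never polled the table; they just waited on the socket until they had something to do.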
antonvs 21 hours ago
It's one of my favorite patterns, because it's the highest-impact, lowest-hanging fruit to fix in many systems that have hit serious scaling bottlenecks.