alexjplant 21 hours ago

Ah yes, and every consumer should just do this in a while (true) loop as producers write to it. Very efficient and simple with no possibility of lock contention or hot spots. Genius, really.

CharlieDigital 20 hours ago | parent | next [-]

I've implemented a distributed worker system on top of this paradigm.

I used ZMQ to connect the nodes: worker nodes connected to an indexer/coordinator node that effectively did a `SELECT FROM ORDER BY ASC`.

It's easier than you might think, and the bits here ended up at probably < 1000 SLOC all told.

    - Coordinator node ingests from a SQL table
    - Each row in the table has a discriminator key; rows are grouped by that key and stacked into an in-memory list-of-lists for ordering
    - Worker nodes are started with _n_ threads
    - Each thread sends a "ready" message to the coordinator and coordinator replies with a "work" message
    - On each cycle, the coordinator advances the pointer on the outer list, locks the child list it lands on, and marks the first item in that child list as "pending"
    - When worker thread finishes, it sends a "completed" message to the coordinator and coordinator replies with another "work" message
    - Coordinator unlocks the list the work item originated from and dequeues the finished item.
    - When it reaches the end of the outer list, it cycles back to the beginning and starts over, skipping any child lists marked as locked (i.e., ones with a pending work item)
Effectively a distributed event loop with the events queued up via a simple SQL query.

Dead simple design, extremely robust, very high throughput, very easy to scale workers both horizontally (more nodes) and vertically (more threads). ZMQ made it easy to connect the remote threads to the centralized coordinator. It was effectively "self balancing" because the workers would only re-queue their thread once it finished work. Very easy to manage, but did not have hot failovers since we kept the materialized, "2D" work queue in memory. Though very rarely did we have issues with this.
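
A rough sketch (not the original code) of that "ready"/"work"/"completed" exchange, assuming Python with pyzmq over REQ/REP sockets; the message fields, the stand-in for the SQL ingest, and the function names here are made up for illustration:

    import itertools
    import zmq

    def run_coordinator(rows, port=5555):
        # rows: iterable of (discriminator, payload) tuples, e.g. the result
        # of the "SELECT ... ORDER BY ... ASC" ingest described above
        queues = {}                          # discriminator -> child list of payloads
        for disc, payload in rows:
            queues.setdefault(disc, []).append(payload)
        locked = set()                       # child lists with a pending work item
        cursor = itertools.cycle(list(queues))  # pointer over the outer list, wraps around

        ctx = zmq.Context()
        sock = ctx.socket(zmq.REP)
        sock.bind(f"tcp://*:{port}")

        while any(queues.values()):
            msg = sock.recv_json()           # "ready" or "completed" from a worker thread
            if msg["type"] == "completed":
                queues[msg["disc"]].pop(0)   # dequeue the finished item...
                locked.discard(msg["disc"])  # ...and unlock its child list
            # advance the pointer, skipping locked or empty child lists
            for _ in range(len(queues)):
                disc = next(cursor)
                if disc not in locked and queues[disc]:
                    locked.add(disc)         # head of this child list is now "pending"
                    sock.send_json({"type": "work", "disc": disc,
                                    "payload": queues[disc][0]})
                    break
            else:
                sock.send_json({"type": "idle"})  # nothing dispatchable right now

    def run_worker(handle, port=5555, host="localhost"):
        # run one of these per worker thread; `handle` is your task function
        ctx = zmq.Context()
        sock = ctx.socket(zmq.REQ)
        sock.connect(f"tcp://{host}:{port}")
        sock.send_json({"type": "ready"})    # announce the thread is free
        while True:
            reply = sock.recv_json()
            if reply["type"] != "work":
                break                        # coordinator had nothing for us
            handle(reply["payload"])
            sock.send_json({"type": "completed", "disc": reply["disc"]})

The self-balancing property falls out of the REQ/REP pattern: a worker thread only asks for more work after it finishes the previous item.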

ahoka 19 hours ago | parent | next [-]

Yeah, but that's like doing actual engineering. Instead you should just point to Kafka and say that it's going to make your horrible architecture scale magically. That's how the pros do it.

tormeh 18 hours ago | parent [-]

Kafka isn't magic, but it's close. If a single-node solution like an SQL database can handle your load then why shouldn't you stick with SQL? Kafka is not for you. Kafka is for workloads that would DDoS Postgres.

kerblang 13 hours ago | parent | prev [-]

Kafka is really not intended to improve on this. Instead, it's intended for very high-volume ETL processing, where a classical message queue delivering records would spend too much time on locking. Kafka is hot-rodding the message queue design and removing guard rails to get more messages through faster.

Generally I say, "Message queues are for tasks, Kafka is for data." But in the latter case, if your data volume is not huge, a message queue for async ETL will do just fine and give better guarantees as far as FIFO goes.

In essence, Kafka is a very specialized version of much more general-purpose message queues, which should be your default starting point. It's similar to replacing a SQL RDBMS with some kind of special NoSQL system - if you need it, okay, but otherwise the general-purpose default is usually the better option.

CharlieDigital 13 hours ago | parent [-]

Of course this is not the same as Kafka, but the comment I'm replying to:

    > Ah yes, and every consumer should just do this in a while (true) loop as producers write to it. Very efficient and simple with no possibility of lock contention or hot spots. Genius, really.
Seemed to imply that it's not possible to build a high-performance pub/sub system on top of a simple SQL select. I don't think that's true; in fact, it's fairly easy. Clearly, the design as proposed is not the same as Kafka.
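
To be concrete about the "simple SQL select" part: this isn't the system I described upthread, just a minimal sketch of one common way a plain SELECT can safely feed concurrent consumers, assuming Postgres with FOR UPDATE SKIP LOCKED and psycopg2; the jobs table and its columns are hypothetical:

    import time
    import psycopg2

    CLAIM_SQL = """
        SELECT id, payload FROM jobs
        WHERE status = 'queued'
        ORDER BY id ASC
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    """

    def consume(dsn, handle):
        # one consumer process; others can run the same loop concurrently
        conn = psycopg2.connect(dsn)
        while True:                          # the much-maligned while (true) loop
            with conn.cursor() as cur:
                cur.execute(CLAIM_SQL)
                row = cur.fetchone()
                if row is None:
                    conn.commit()            # end the transaction
                    time.sleep(0.5)          # back off while the queue is empty
                    continue
                job_id, payload = row
                handle(payload)              # the row stays locked while we work
                cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s",
                            (job_id,))
            conn.commit()                    # commit releases the row lock

SKIP LOCKED means two consumers never claim the same row, which is what sidesteps the lock contention the GP was pointing at.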
alexjplant 12 hours ago | parent [-]

No, I implied that implementing pub/sub with just a select statement is silly because it is. Your implementation accounts for the pitfalls of this approach with smart design using a message queue and intelligent locking semantics. The parent of my comment was glib and included none of this.

antonvs 21 hours ago | parent | prev [-]

It's one of my favorite patterns, because it's the highest-impact, lowest-hanging fruit to fix in many systems that have hit serious scaling bottlenecks.