| ▲ | md3911027514 3 days ago |
| Regarding the "Durable Queueing Tradeoffs", doesn't Kafka prove you can be both durable and highly performant? |
|
| ▲ | zerotolerance 3 days ago | parent | next [-] |
| Kafka is a wonderful technology that punts on the most difficult part of distributed stream processing and makes it the consumer's problem. |
| ▲ | nosefrog 3 days ago | parent [-] |
| What's the most difficult part of distributed stream processing? |
| ▲ | zerotolerance 3 days ago | parent [-] |
| The most difficult part is managing delivered/processed state and ordered delivery. Consistent ordering of receipt into a distributed buffer is a real challenge, and most stacks do that pretty well. But deciding when a message has been processed, and when you can safely decide never to deliver it again, is especially hard in a distributed environment. The article dances around this when the author talks about dropped messages, etc. It is tempting to say "use a stream server," but ultimately stream servers make head-of-line accounting the consumer's responsibility. That's usually solved with some kind of (not distributed) lock. |
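The processed-state problem above can be sketched in a few lines. This is a toy model in Python, not a real Kafka client, and all names are illustrative: the broker keeps only an ordered log plus a committed offset, and choosing when to advance that offset is the consumer's accounting problem.

```python
# Toy model: the broker stores an ordered log and a committed offset;
# deciding WHEN to advance the offset is left to the consumer.

class Partition:
    """A durable, ordered log. The broker redelivers from the committed offset."""
    def __init__(self, messages):
        self.log = list(messages)
        self.committed = 0  # broker-side: first offset NOT yet acknowledged

    def poll(self, offset):
        return self.log[offset:]

def consume(partition, handler, crash_before_commit_at=None):
    """At-least-once loop: handle first, commit second."""
    for i, msg in enumerate(partition.poll(partition.committed),
                            start=partition.committed):
        handler(msg)  # side effects happen here
        if i == crash_before_commit_at:
            return  # simulated crash after the side effect, before the commit
        partition.committed = i + 1  # commit = "safe to never deliver again"

seen = []
p = Partition(["a", "b", "c"])
consume(p, seen.append, crash_before_commit_at=1)  # "b" handled, not committed
consume(p, seen.append)                            # restart replays from offset 1
# seen == ["a", "b", "b", "c"]: "b" arrives twice. Committing before handling
# would instead risk losing "b" entirely on a crash.
```

Either ordering of handle-vs-commit pushes a failure mode (duplicates or loss) onto the consumer, which is the point the comment is making.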
|
| ▲ | KraftyOne 3 days ago | parent | prev [-] |
| Kafka is great for streaming use cases, but the big advantage of Postgres-backed queues is that they can integrate with durable workflows, providing durability guarantees for larger programs. For example, a workflow can enqueue many tasks, then wait for them to complete, with fault-tolerance guarantees both for the individual tasks and the larger workflow. |
| ▲ | dbacar 3 days ago | parent | next [-] |
| I guess if you use different topics (queues) in Kafka you can do all this with the help of a processor like Storm or Spark, routing messages to different topics and hence forming a workflow. |
| ▲ | chatmasta 3 days ago | parent | prev [-] |
| Huh? Kafka messages are durable just like Postgres commits are durable. That's why Kafka is used for things like Debezium, which needs a durable queue of CDC messages such as those from the Postgres WAL. There's nothing inherently different about Postgres's durability that makes it better than Kafka for implementing durable workflows. There are many reasons Postgres is a better choice for building a system like DBOS, ranging from ergonomics to ecosystem compatibility. But in theory you could build the same solution on Kafka, and if the company had been co-founded by the Kafka creators rather than Michael Stonebraker, maybe they would have chosen that. |
|