enether 3 days ago

Love the story!

> Kafka was one of the first systems that fully embraced the dropping of the C from CAP theorem, which was a big step forward for web applications at scale.

Could you expand on this - when does it drop C? Are you referring to cases where you write to Kafka without waiting for all replicas to acknowledge the write? (acks=1)

And why was it a big step - what other systems didn't embrace dropping the C?

erulabs 3 days ago | parent

Well at the time (and this is still largely true), people were very insistent on ACID compliance in databases. That obviously made sense for many applications, but it became a bottleneck at huge scale, and being able to be eventually consistent became a golden feature. In SQL land it was worked around with e.g. read replicas in production, since asynchronous replication gives up the Consistency in favor of the Availability. Kafka’s “acks=1” is part of the story, yes, but simply writing events to be processed eventually also accomplishes “dropping consistency”.
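
To make the acks=1 part concrete, here’s a rough sketch with the standard Java client; the broker address and topic name are made up, it just illustrates leader-only acknowledgement rather than any particular production setup:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LeaderOnlyAckProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // acks=1: only the partition leader must persist the record before the
            // send is acknowledged; followers replicate asynchronously, so an
            // acknowledged write can be lost if the leader dies before replication.
            props.put("acks", "1");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // "page-views" is a hypothetical topic; consumers pick these events
                // up later, so the overall system is only eventually consistent.
                producer.send(new ProducerRecord<>("page-views", "user-123", "viewed /pricing"));
            }
        }
    }

The point is that the consistency trade-off happens twice: once at the acks setting, and again in the architecture, because nothing downstream has processed the event at the moment the producer moves on.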

Native support for dropping consistency inside a SQL database itself is tricky (see transaction isolation levels), but I mostly mean the overall web architecture, rather than any one database in particular.
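
For contrast, here’s roughly what relaxing guarantees looks like inside a single SQL database with plain JDBC; the connection string and table are placeholders, the point is just that you opt into weaker isolation per transaction rather than per system:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class IsolationLevelDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; any JDBC-compliant database works.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/shop", "app", "secret")) {
                // READ COMMITTED: each statement sees the latest committed data, so
                // two reads inside one transaction may disagree (non-repeatable reads),
                // whereas SERIALIZABLE would forbid that at the cost of throughput.
                conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT balance FROM accounts WHERE id = 1")) {
                    while (rs.next()) {
                        System.out.println("balance = " + rs.getBigDecimal("balance"));
                    }
                }
                conn.commit();
            }
        }
    }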