djoldman 3 days ago
> In 2010, LinkedIn had 90 million members. Today, we serve over 1.2 billion members on LinkedIn. Unsurprisingly, this increase has created some challenges over the years, making it difficult to keep up with the rapid growth in the number, volume, and complexity of Kafka use cases. Supporting these use-cases meant running Kafka at a scale of over 32T records/day at 17 PB/day on 400K topics distributed across 10K+ machines within 150 clusters. https://www.linkedin.com/blog/engineering/infrastructure/int...
enether 3 days ago | parent
~197 GB/s ... nice. I believe these companies save literally every ounce of data they can find. Once you have the infra and teams for it, it seems easy to make a case for storing something. Similarly, Uber has shared they push 89 GB/s through Kafka, which works out to ~7.7 PB/day. People always ask me: what is a taxi/food-delivery app storing so much data for?
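A quick sanity check on those numbers, assuming decimal units (1 PB = 10^6 GB) and traffic spread evenly over the day:

```python
# Back-of-envelope conversion between PB/day and GB/s,
# assuming sustained, evenly distributed traffic.
SECONDS_PER_DAY = 86_400
GB_PER_PB = 1_000_000  # decimal units

# LinkedIn: 17 PB/day -> GB/s
linkedin_gb_per_s = 17 * GB_PER_PB / SECONDS_PER_DAY
print(f"LinkedIn: {linkedin_gb_per_s:.0f} GB/s")  # ~197 GB/s

# Uber: 89 GB/s -> PB/day
uber_pb_per_day = 89 * SECONDS_PER_DAY / GB_PER_PB
print(f"Uber: {uber_pb_per_day:.1f} PB/day")  # ~7.7 PB/day
```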