threeseed 6 days ago

> Consistency is hard to achieve and seems to have no use here

Famous last words.

It is very common when operating data platforms like this, at this scale, to lose a lot of nodes over time, especially in the cloud. So having a robust consistency/replication mechanism is vital to making sure your training job doesn't need to be restarted just because a block it needs happened to live on a node that went away.
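
For a sense of what that replication buys you, here is a minimal toy sketch (the names, the node layout, and the replication factor are all made up for illustration, not taken from any particular system):

    import random

    REPLICATION_FACTOR = 3

    def put_block(nodes, block_id, data):
        # Write the block to REPLICATION_FACTOR distinct nodes so that
        # losing any single node cannot lose the block.
        for node_id in random.sample(list(nodes), REPLICATION_FACTOR):
            nodes[node_id][block_id] = data

    def get_block(nodes, block_id):
        # Read from any replica that still holds the block; this only
        # fails if every node carrying it has been lost.
        for store in nodes.values():
            if block_id in store:
                return store[block_id]
        raise KeyError(f"all replicas of {block_id} lost")

    # Three replicas survive the loss of two out of five nodes, so the
    # training job keeps reading instead of restarting.
    cluster = {f"node{i}": {} for i in range(5)}
    put_block(cluster, "blk-42", b"training shard")
    del cluster["node0"], cluster["node1"]
    print(get_block(cluster, "blk-42"))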

ted_dunning 6 days ago | parent

Sadly, these are often famous first words.

What follows is a long period of saying, "see, distributed systems are easy for genius developers like me".

The last words are typically "oh shit", shortly followed, oxymoronically, by "bye! gotta go".

londons_explore 6 days ago | parent

Indeed, redundancy is fairly important (although for the largest part, the training data, it actually doesn't matter if chunks are missing).

But the type of consistency they were talking about is strong ordering: the kind of thing you might want in a database with lots of people reading and writing tiny bits of data, potentially the same bits of data, where a user's writes must be rejected if they are impossible to fulfil and reads must never return an impossible intermediate state. That isn't needed for machine learning.
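
To make the contrast concrete, here is a sketch of that kind of strong ordering using a version-checked (compare-and-swap) write; the class and method names are illustrative, not any real system's API:

    import threading

    class VersionedStore:
        def __init__(self):
            self._lock = threading.Lock()
            self._data = {}  # key -> (version, value)

        def read(self, key):
            # Returns a consistent (version, value) pair, never a
            # half-applied intermediate state.
            with self._lock:
                return self._data.get(key, (0, None))

        def cas_write(self, key, expected_version, value):
            # Succeeds only if nobody has written since the version we
            # read; otherwise the write is rejected. A store of immutable
            # training-data blocks never needs this machinery.
            with self._lock:
                current_version, _ = self._data.get(key, (0, None))
                if current_version != expected_version:
                    return False
                self._data[key] = (current_version + 1, value)
                return True

    store = VersionedStore()
    v, _ = store.read("balance")
    assert store.cas_write("balance", v, 100)      # first writer wins
    assert not store.cas_write("balance", v, 200)  # stale write rejected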