asdfasgasdgasdg 5 days ago:
10^-7 (losses/record) * 10^8 (records/year) yields 10 data losses per year. If you're even a medium-sized business, you need a much better loss probability than 10^-7.
Dylan16807 5 days ago:
That's only true if your typical loss event loses one record. If you have a one-in-a-million chance of an array failure taking out 10% of your production database, and otherwise have zero possibility of data loss, you also get 10^-7 losses per record. And I wouldn't assume they meant that number to be per record in the first place.
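A quick sketch of the point being made here, using the hypothetical numbers from these two comments (10^8 records, a one-in-a-million event wiping 10% of them): two very different failure modes can produce the same per-record loss rate.

```python
# Two scenarios with the same expected per-record loss rate (1e-7/year),
# showing the rate alone says nothing about loss-event size.
# All numbers are the hypothetical ones from the comments above.

records = 10**8  # records stored

# Scenario A: each record independently lost with probability 1e-7/year.
per_record_rate_a = 1e-7
expected_lost_a = records * per_record_rate_a  # about 10 records/year

# Scenario B: a 1-in-a-million array failure wipes 10% of the database;
# otherwise, zero loss.
p_event = 1e-6
fraction_lost = 0.10
per_record_rate_b = p_event * fraction_lost    # also 1e-7 per record/year
expected_lost_b = records * per_record_rate_b  # also about 10 records/year

print(expected_lost_a, expected_lost_b)
```

Both scenarios lose ~10 records per year in expectation, but scenario B loses them in one catastrophic event once every million years on average, which is a very different risk profile.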
klodolph 4 days ago:
The half-remembered storage system I pulled those numbers from had records ~100 GB in size, so a 10^-7 loss rate works out to 1 loss event per year, per exabyte of data. A loss event is just "at least one bit in the record cannot be read within a certain deadline." Durability is a knob. If you have enough data, or turn the knob too far in the direction of durability, you will simply bankrupt yourself, or maybe drown your service in latency. It makes sense that you would have storage services that provide different levels of durability.
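A sanity check on the per-exabyte figure, using the half-remembered numbers from this comment (~100 GB records, a 10^-7 per-record annual loss rate):

```python
# Expected loss events per year, per exabyte, given ~100 GB records
# and a 1e-7 per-record annual loss rate (numbers from the comment above).
record_size_bytes = 100 * 10**9   # ~100 GB per record
exabyte = 10**18                  # bytes in an exabyte
loss_rate = 1e-7                  # loss events per record, per year

records_per_exabyte = exabyte // record_size_bytes  # 10^7 records
events_per_year = records_per_exabyte * loss_rate   # about 1 event/year/EB

print(records_per_exabyte, events_per_year)
```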