DixieDev 4 hours ago
This line of thought works for storage in isolation, but does not hold up if write speed is a concern.
3 hours ago
[deleted]
sandworm101 19 minutes ago
Speed can always be improved: if a method is too slow, run multiple machines in parallel. Longevity is different because it cannot scale that way. A million CD burners together are very fast, but the CDs won't last any longer. So the storage method is the more profound tech problem.
cannonpalms 3 hours ago
So long as fast (or optimal) real-time access to newly written data is not a concern, you can introduce compaction to solve both problems.
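
A minimal sketch of that idea (hypothetical class and names, not any particular engine): writes land in small append-only segments with no sorting work, and a background compaction pass rewrites them into a single sorted, read-optimized structure.

    from bisect import bisect_left

    class CompactingStore:
        def __init__(self, segment_limit=4):
            self.segments = []       # append-only segments (write-optimized)
            self.compacted = []      # one sorted list of (key, value) (read-optimized)
            self.segment_limit = segment_limit

        def put(self, key, value):
            # Fast path: just append; no ordering work at write time.
            if not self.segments or len(self.segments[-1]) >= 1024:
                self.segments.append([])
            self.segments[-1].append((key, value))
            if len(self.segments) > self.segment_limit:
                self.compact()

        def get(self, key):
            # Fresh, uncompacted data is slow to search (scan, newest first)...
            for seg in reversed(self.segments):
                for k, v in reversed(seg):
                    if k == key:
                        return v
            # ...compacted data is fast (binary search).
            i = bisect_left(self.compacted, (key,))
            if i < len(self.compacted) and self.compacted[i][0] == key:
                return self.compacted[i][1]
            return None

        def compact(self):
            # The "second write": rewrite everything once more, sorted,
            # keeping only the newest value per key.
            merged = dict(self.compacted)
            for seg in self.segments:
                for k, v in seg:
                    merged[k] = v
            self.compacted = sorted(merged.items())
            self.segments = []

The point is only that the write path does no read-optimization work, while reads over fresh data stay slow until compaction catches up.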
convolvatron 4 hours ago
As a line of thought, it totally does: you just extend the workload description to include writes. Where this gets problematic is that the ideal structure for transactional writes is nearly pessimal from a read standpoint, which is why we seem to end up doubling the write overhead, once to remember and once to optimize, or taking a highly write-centric approach like an LSM tree. I'd love to be clued in on more interesting architectures that either attempt to optimize both or provide a more continuous tuning knob between them.
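
One crude version of that knob is the compaction policy itself. A back-of-the-envelope model (assuming the textbook asymptotics for leveled vs. tiered LSM compaction, not measurements of any engine): sweeping the size ratio trades write amplification against read amplification.

    import math

    def lsm_costs(total_entries, memtable_entries=1_000_000, fanout=10):
        """Rough write/read amplification for the two classic compaction policies."""
        levels = max(1, math.ceil(math.log(total_entries / memtable_entries, fanout)))
        return {
            "levels": levels,
            # Leveling: each entry is rewritten ~fanout times per level,
            # but every level is one sorted run, so reads probe ~1 run per level.
            "leveling": {"write_amp": fanout * levels, "read_amp": levels},
            # Tiering: each entry is written ~once per level, but a level can
            # hold up to `fanout` overlapping runs that a read may have to probe.
            "tiering": {"write_amp": levels, "read_amp": fanout * levels},
        }

    if __name__ == "__main__":
        for t in (4, 10, 32):
            print(t, lsm_costs(10**10, fanout=t))

Hybrid policies (lazy leveling, per-level policy choices) are attempts to make that trade-off more continuous rather than a binary switch.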