yjftsjthsd-h 4 days ago
> Compression seems silly in the modern world. Virtually everything is already compressed.

IIRC my laptop's zpool has a 1.2x compression ratio; it's worth doing. At a previous job, we had over a petabyte of postgres on ZFS and saved real money with compression. Hilariously, on some servers we also improved performance because ZFS could decompress reads faster than the disk could read.
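To see why a read speedup from compression is plausible, here is a rough back-of-envelope sketch in Python. The disk and decompression throughput figures are assumptions for illustration only; nothing here comes from the comment above beyond the 1.2x ratio:

    # Back-of-envelope: effective read throughput from a compressed zpool.
    # All figures are illustrative assumptions, not measurements.
    disk_read_mb_s = 500          # assumed raw sequential read speed
    compress_ratio = 1.2          # logical bytes / physical bytes, as above
    lz4_decompress_mb_s = 3000    # assumed single-core LZ4 decompression speed

    # Reading R MB/s of compressed blocks yields R * ratio MB/s of logical
    # data, provided the CPU can decompress at least that fast.
    effective_mb_s = min(disk_read_mb_s * compress_ratio, lz4_decompress_mb_s)
    print(f"{effective_mb_s:.0f} MB/s logical vs {disk_read_mb_s} MB/s raw")
    # -> 600 MB/s vs 500 MB/s: the CPU keeps up, so compression becomes
    #    a net read-speed win on top of the space savings.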
adzm 4 days ago
> we also improved performance because ZFS could decompress reads faster than the disk could read

This is my favorite side effect of compression in the right scenarios. I remember getting a huge speedup in a proprietary in-memory data structure by using LZO (or one of those fast algorithms), which outperformed memcpy, and this was already in memory, so no disk I/O was involved! And it used less than a third of the memory.
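A minimal sketch of that idea in Python: keep the data compressed in memory and decompress it on access. zlib level 1 stands in for LZO here, since the actual structure and codec are unknown; whether decompression really beats a plain copy depends on the codec, the data, and the hardware, so this only illustrates the memory side of the trade-off:

    import sys
    import zlib

    # zlib level 1 stands in for a fast codec like LZO/LZ4; treat this purely
    # as an illustration of trading a little CPU for a lot less RAM.
    class CompressedBlob:
        def __init__(self, data: bytes, level: int = 1):
            self._packed = zlib.compress(data, level)

        def get(self) -> bytes:
            # Decompress on every read.
            return zlib.decompress(self._packed)

    raw = b"some fairly repetitive record data\n" * 100_000
    blob = CompressedBlob(raw)
    print(f"raw:        {len(raw):>9,} bytes")
    print(f"compressed: {len(blob._packed):>9,} bytes")
    assert blob.get() == raw   # round-trips losslessly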
bionsystem 3 days ago
The performance gain from compression (replacing I/O with compute) is not ironic; it was seen as a feature of the various NAS appliances that Sun (and after them Oracle) developed around ZFS.
pezezin 4 days ago
How do you get a PostgreSQL database to grow to one petabyte? The maximum table size is 32 TB o_O