toast0 13 hours ago
All of those things come with tradeoffs.

Compression trades off compute vs I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.

Dedupe needs indexing to find duplicates and makes writes complex (at least for realtime dedupe). I think online dedupe has pretty limited application, but offline dedupe is interesting.

Copy on write again makes writes complex, and tends to fragment files that are modified. Free snapshots are only free when copy on write is the norm (otherwise you have to copy on write while a snapshot is open, as on FreeBSD UFS). Copy on write offers a lot, but some applications would suffer. (A toy sketch of the mechanism follows below.)

Dynamic resizing upwards is pretty common now; resizing down less so. ZFS downsizing is available, but at least when I tried it the filesystem became unbootable, so maybe not super useful IMHO.

Logical partitions, per-user stuff, and QoS add complexity probably not needed for everyone.
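To make the copy-on-write point concrete, here's a toy Python sketch of a CoW block store. All names are hypothetical and nothing here resembles a real filesystem's on-disk layout; it just shows why snapshots are nearly free under CoW and why rewrites fragment files.

    # Toy copy-on-write block store: writes never overwrite in place,
    # snapshots only copy the (small) block map, never the data.
    class CowStore:
        def __init__(self):
            self.blocks = []      # physical blocks, append-only
            self.block_map = {}   # logical block number -> physical index
            self.snapshots = {}   # snapshot name -> frozen copy of block_map

        def write(self, logical_no, data):
            # Allocate a new physical block and repoint the logical block.
            # Old data stays put, so snapshots referencing it remain valid.
            self.blocks.append(data)
            self.block_map[logical_no] = len(self.blocks) - 1

        def snapshot(self, name):
            # A snapshot is just a copy of the block map -- no data copied,
            # which is why it is nearly free.
            self.snapshots[name] = dict(self.block_map)

        def read(self, logical_no, snapshot=None):
            mapping = self.snapshots[snapshot] if snapshot else self.block_map
            return self.blocks[mapping[logical_no]]

    store = CowStore()
    store.write(0, b"v1-block0")
    store.write(1, b"v1-block1")
    store.snapshot("before")

    # Rewriting block 0 allocates a new physical block at the end of storage:
    # the live file's blocks are now physically out of order (fragmentation),
    # while the snapshot still sees the original data for free.
    store.write(0, b"v2-block0")
    print(store.read(0))                     # b'v2-block0'
    print(store.read(0, snapshot="before"))  # b'v1-block0'
    print(store.block_map)                   # {0: 2, 1: 1} -- no longer contiguous

An in-place filesystem (like UFS without soft-updates tricks) has to copy the old block somewhere before overwriting it while a snapshot is open, which is exactly the cost CoW avoids.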
Dylan16807 8 hours ago | parent
> Compression trades off compute vs I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.

Older systems with worse compute also had worse I/O. There are cases where fast compression slows things down, but they're rare enough to make compression the better default.
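For a rough sense of the numbers, a quick back-of-the-envelope sketch in Python. zlib level 1 stands in for a fast filesystem compressor (real filesystems use LZ4/zstd), the data is artificially repetitive, and the 200 MB/s disk figure is an assumption, not a measurement.

    import time, zlib

    data = b"some moderately repetitive log line 12345\n" * 500_000  # ~21 MB
    DISK_MBPS = 200  # assumed sequential write bandwidth

    start = time.perf_counter()
    compressed = zlib.compress(data, level=1)
    elapsed = time.perf_counter() - start

    compress_mbps = len(data) / elapsed / 1e6
    ratio = len(data) / len(compressed)
    print(f"compress throughput: {compress_mbps:.0f} MB/s, ratio: {ratio:.1f}x")

    # Simplification: assume compression and writing pipeline, so the
    # compressed path is limited by the slower of the two stages, and only
    # 1/ratio of the bytes ever hit the disk.
    raw_secs = len(data) / (DISK_MBPS * 1e6)
    comp_secs = max(elapsed, len(compressed) / (DISK_MBPS * 1e6))
    print(f"raw write: {raw_secs*1000:.0f} ms, compressed write: {comp_secs*1000:.0f} ms")

If the compressor's throughput beats the disk's write bandwidth (which it usually does for LZ4-class algorithms on anything resembling modern hardware), the compressed path wins even before counting the read-side savings.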