webdevver 15 hours ago

I've always felt that file systems are by far the weakest point in the entire computing industry as we know it.

Something like ZFS should have been bog standard, yet it's touted as an 'enterprise-grade' filesystem. Why is common sense restricted to 'elite' status?

Of course I want transparent compression, dedup, copy-on-write, free snapshots, logical partitions, dynamic resizing, per-user/partition capabilities & QoS. I want it now, here, by default, on everything! (Just to clarify, I've never actually used ZFS.)

It's so strange that in the compute space you have Docker & cgroups and software-defined networking, while on the hard drive side I'm dragging boxes around in GParted like it's the Victorian era.

Why can't we just... have cool storage stuff, out of the box?

toast0 13 hours ago | parent | next [-]

All of those things come with tradeoffs.

Compression trades off compute against I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.

Dedupe needs indexing to find duplicates and makes writes complex (at least for realtime dedupe). I think online dedupe has pretty limited application, but offline dedupe is interesting.
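The indexing an offline dedupe pass needs can be sketched in a few lines. This is a toy illustration, not any real filesystem's implementation: it hashes whole files under a directory tree and groups paths by content hash (real dedupe works on blocks, and `find_duplicates` is a name invented here).

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Offline dedupe pass: index files by content hash, return duplicate groups."""
    index = defaultdict(list)  # content hash -> list of paths with that content
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # hash in 1 MiB chunks so large files don't need to fit in memory
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            index[h.hexdigest()].append(path)
    # any group with more than one path is a dedupe candidate
    return [paths for paths in index.values() if len(paths) > 1]
```

The index is also what makes realtime dedupe expensive: an online implementation has to consult (and update) a structure like this on every write, not just during an idle-time scan.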

Copy on write again makes writes complex, and tends to fragment files that are modified in place. Free snapshots are only free when copy on write is the norm (otherwise, you have to copy on write while a snapshot is open, as on FreeBSD UFS). Copy on write offers a lot, but some applications would suffer.
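Why snapshots become free under copy-on-write can be shown with a toy model (names and structure are illustrative, nothing like a real on-disk format): a file is a map from logical block numbers to block IDs, a snapshot is just a copy of that map, and every write allocates a fresh block instead of overwriting.

```python
class CowFile:
    """Toy copy-on-write store.

    Blocks are never modified in place, so a snapshot costs one dict copy.
    The flip side: every overwrite lands in a newly allocated block, which
    is exactly where the fragmentation mentioned above comes from.
    """

    def __init__(self):
        self.blocks = {}     # block id -> bytes (append-only block pool)
        self.next_id = 0
        self.block_map = {}  # logical block number -> block id (live version)

    def write(self, lbn, data):
        # copy-on-write: allocate a fresh block, then remap the logical block
        self.blocks[self.next_id] = data
        self.block_map[lbn] = self.next_id
        self.next_id += 1

    def snapshot(self):
        # a "free" snapshot: remember the current map; all blocks are shared
        return dict(self.block_map)

    def read(self, block_map, lbn):
        return self.blocks[block_map[lbn]]
```

A snapshot taken before an overwrite keeps seeing the old block, because the overwrite never touched it:

```python
f = CowFile()
f.write(0, b"v1")
snap = f.snapshot()
f.write(0, b"v2")
f.read(snap, 0)        # still b"v1"
f.read(f.block_map, 0) # b"v2"
```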

Dynamic resizing (upwards) is pretty common now; resizing down, less so. ZFS downsizing is available, but at least when I tried it, the filesystem became unbootable, so maybe not super useful IMHO.

Logical partitions, per-user stuff, and QoS add complexity that probably isn't needed by everyone.

Dylan16807 8 hours ago | parent [-]

> Compression trades off compute against I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.

Older systems with worse compute also had worse I/O. There are cases where fast compression slows things down, but they're rare enough that compression is the better default.
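The shape of the tradeoff is easy to see from the stdlib, using zlib as a stand-in for the faster compressors (LZ4, zstd) that filesystems actually default to: a low compression level is cheap and still shrinks text-like data dramatically, while incompressible data doesn't shrink at all.

```python
import os
import zlib

# Highly compressible sample input, the way logs and source trees tend to be.
data = b"the quick brown fox jumps over the lazy dog\n" * 1000

for level in (1, 6, 9):
    compressed = zlib.compress(data, level)
    ratio = len(data) / len(compressed)
    print(f"level {level}: {len(compressed):6d} bytes, ratio {ratio:.1f}x")

# Already-compressed or encrypted input barely shrinks (it can even grow
# slightly), which is why filesystems that compress transparently detect
# this case and store such blocks uncompressed rather than waste CPU.
random_data = os.urandom(len(data))
print(f"random input: {len(zlib.compress(random_data, 1))} bytes from {len(random_data)}")
```

Even at level 1 the repetitive input collapses by orders of magnitude, so on any system where the CPU can keep ahead of the disk, compression saves more I/O time than it costs in compute.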

SoftTalker 14 hours ago | parent | prev | next [-]

Because the vast majority of personal computer users have no need for the complexity of zfs. That doesn't come for free, and if something goes wrong the average user is going to have no hope of solving it.

FAT, ext4, and FFS are all pretty simple and bulletproof and do everything the typical user needs.

Servers in enterprise settings have higher demands but they can afford an administrator who knows how to manage them and handle problems. In theory.

mixmastamyk 13 hours ago | parent [-]

FAT bulletproof? The newest versions have a few improvements but this is a line of filesystems for disposable sneakernet data.

SoftTalker 11 hours ago | parent [-]

Maybe bulletproof is a bit strong, but I mean, it was fine on DOS/Windows for decades. I never lost data due to filesystem corruption on those computers. Media failures, yes, frequently, in the days of floppy disks.

pjmlp an hour ago | parent [-]

I had an HD fail on me while using Windows 98 as my main OS, yet thanks to ext (I think it was ext2 at the time), I still managed to repurpose it for Linux for several months.

It was OK from the point of view of possible data loss; I didn't have much data other than the distro and the stuff I needed to compile under Linux.

Somehow it still managed to work with the disk, using the sectors that were not damaged.

pessimizer 14 hours ago | parent | prev [-]

Because it was extremely difficult to create something like ZFS? And it was proprietary and patent-encumbered, and the permissively licensed versions were buggy until about 5 minutes ago?

That's like saying the Romans should have just used computers.