| ▲ | drfuchs 17 hours ago |
| Not being able to chown() caused us grief developing Frame Maker back in the 80s. The responsible way to handle "save" was to write the document into a new file mydoc.new, then rename mydoc.cur to mydoc.backup, and then rename mydoc.new to mydoc.cur, so that a failure never left you in the lurch. The only problem was that there was no way to create mydoc.new with the same owner as mydoc.cur, and customers complained that we'd keep changing the owner of their files. If only the semantics of the Unix filesystem had supported file generation numbers, like on TOPS-20 or VAX/VMS, where the default for writing to a file isn't "yeah, sure, write over top of the old data, and let's hope nothing fails along the way," this would not have been a problem. |
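
(A minimal sketch of that save dance in C on a POSIX system, for anyone who hasn't run into it. The function name safe_save and its arguments are illustrative, not FrameMaker's actual code; the chown() call is the step that has no good answer for a non-root process, which is exactly the complaint above.)

    /* A sketch of the "safe save" dance, assuming POSIX; not FrameMaker's code. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int safe_save(const char *cur, const char *tmp, const char *bak,
                  const void *data, size_t len)
    {
        struct stat st;
        int have_old = (stat(cur, &st) == 0);

        /* 1. Write the new contents to a fresh file (e.g. mydoc.new). */
        FILE *f = fopen(tmp, "wb");
        if (!f) return -1;
        if (fwrite(data, 1, len, f) != len || fclose(f) != 0) return -1;

        /* 2. Try to make the new file look like the old one. Copying the
           permission bits works, but chown() to another user fails with
           EPERM unless the process is root -- so the saved file ends up
           owned by whoever ran the editor. */
        if (have_old) {
            chmod(tmp, st.st_mode & 07777);
            chown(tmp, st.st_uid, st.st_gid);
        }

        /* 3. Shuffle names so a crash never leaves you without a good copy. */
        if (have_old && rename(cur, bak) != 0) return -1;
        return rename(tmp, cur);
    }

(Calling safe_save("mydoc.cur", "mydoc.new", "mydoc.backup", buf, n) reproduces the rename sequence described above; only the ownership step can't be made right without privileges.)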
|
| ▲ | quesera 13 hours ago | parent | next [-] |
| > caused us grief developing Frame Maker back in the 80s
To be fair, Frame Maker caused the rest of us a whole lot of grief back then, too. :) The license manager daemon, lmgrd (?), would crash regularly enough that we just patched the dependency out of our binaries. Sorry about that! |
|
| ▲ | webdevver 15 hours ago | parent | prev | next [-] |
| ive always felt that file systems are by far the weakest point in the entire computing industry as we know it. something like zfs should have been bog standard, yet it's touted as an 'enterprise-grade' filesystem. why is common sense restricted to 'elite' status? of course i want transparent compression, dedup, copy on write, free snapshots, logical partitions, dynamic resizing, per-user/partition capabilities & qos. i want it now, here, by default, on everything! (just to clarify, i've never used zfs.) it's so strange that in the compute space you have docker & cgroups and software-defined networking, while on the hard-drive side i'm dragging boxes around in gparted like it's the victorian era. why can't we just... have cool storage stuff? out of the box? |
| |
| ▲ | toast0 13 hours ago | parent | next [-] | | All of those things come with tradeoffs. Compression trades off compute for i/o; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression. Dedupe needs indexing to find duplicates and makes writes complex (at least for realtime dedupe). I think online dedupe has pretty limited application, but offline dedupe is interesting. Copy on write again makes writes complex, and tends to fragment files that are modified. Free snapshots are only free when copy on write is the norm (otherwise, you have to copy on write while a snapshot is open, as on FreeBSD UFS). Copy on write offers a lot, but some applications would suffer. Dynamic resizing (upwards) is pretty common now; resizing down, less so. ZFS downsizing is available, but at least when I tried it, the filesystem became unbootable, so maybe not super useful IMHO. Logical partitions, per-user stuff, and qos add complexity that most people probably don't need. | | |
| ▲ | Dylan16807 8 hours ago | parent [-] | | > Compression trades off compute for i/o; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.
Older systems with worse compute also had worse i/o. There are cases where fast compression slows things down, but they're rare enough to make compression the better default. |
| |
| ▲ | SoftTalker 14 hours ago | parent | prev | next [-] | | Because the vast majority of personal computer users have no need for the complexity of zfs. That doesn't come for free, and if something goes wrong the average user is going to have no hope of solving it. FAT, ext4, and FFS are all pretty simple and bulletproof and do everything the typical user needs. Servers in enterprise settings have higher demands, but they can afford an administrator who knows how to manage them and handle problems. In theory. | | |
| ▲ | mixmastamyk 13 hours ago | parent [-] | | FAT bulletproof? The newest versions have a few improvements but this is a line of filesystems for disposable sneakernet data. | | |
| ▲ | SoftTalker 11 hours ago | parent [-] | | Maybe bulletproof is a bit strong, but I mean, it was fine on DOS/Windows for decades. I never lost data to filesystem corruption on those computers. Media failures, yes, frequently in the days of floppy disks. | | |
| ▲ | pjmlp an hour ago | parent [-] | | I had an HD fail on me while using Windows 98 as my main OS, yet thanks to ext (I think it was ext2 at the time), I still managed to repurpose it for Linux for several months. It was OK from a data-loss point of view; I didn't have much data beyond the distro and the stuff I needed to compile under Linux. Somehow it managed to keep working with the disk, using the sectors that were not damaged. |
|
|
| |
| ▲ | pessimizer 14 hours ago | parent | prev [-] | | Because it was extremely difficult to create something like zfs? And it was proprietary and patent-encumbered, and the permissively licensed versions were buggy until about 5 minutes ago? That's like saying the Romans should have just used computers. |
|
|
| ▲ | SoftTalker 15 hours ago | parent | prev [-] |
| I would guess that many early systems just didn't have the storage space to keep multiple versions of files around. Was VMS saving diffs or full copies of files? Once storage space was plentiful, the pattern of "overwrite the existing file" was already well established. |
| |
| ▲ | drfuchs 9 hours ago | parent [-] | | Typical TOPS-20 and VMS hardware of the time would have less than a gigabyte of spinning disk space, to be shared among many dozens of users. Full copies of files were saved, and there were strict per-user disk allotments. Creating Generation 2 of a file would mark the Generation 1 version as deleted. When you ran out of allotment during execution, the OS would pause your program and give you the chance to issue an Expunge command to really recycle all (or a subset) of the deleted files, and then you'd just Continue the paused process. Similar to desktop "Trash" folders where deleted things go, and that you may have to Empty once in a while. |
|
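
(For the curious: a rough sketch of how generation numbers could be emulated on a plain POSIX filesystem today. The ";N" suffix convention and the open_next_generation helper are invented here for illustration; an "expunge" would amount to deleting every generation but the newest.)

    /* Hedged sketch: TOPS-20/VMS-style generation numbers over plain POSIX.
       Saving never overwrites; it creates "<name>;<highest+1>" instead. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    FILE *open_next_generation(const char *name)
    {
        int highest = 0;
        size_t prefix = strlen(name);
        DIR *d = opendir(".");
        if (!d) return NULL;

        /* Find the highest existing generation of the form "<name>;<number>". */
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            int gen;
            if (strncmp(e->d_name, name, prefix) == 0 &&
                sscanf(e->d_name + prefix, ";%d", &gen) == 1 &&
                gen > highest)
                highest = gen;
        }
        closedir(d);

        /* Older generations stay on disk until an explicit "expunge". */
        char path[4096];
        snprintf(path, sizeof path, "%s;%d", name, highest + 1);
        return fopen(path, "wb");
    }

(This only mimics the naming, of course; the real win on TOPS-20 and VMS was that the operating system did it for every write, so applications never had to invent their own backup-and-rename dance.)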