The early Unix history of chown() being restricted to root (utcc.utoronto.ca)
79 points by kencausey 5 days ago | 25 comments
drfuchs 7 hours ago | parent | next [-]

Not being able to chown() caused us grief developing Frame Maker back in the 80s. The responsible way to handle "save" was to write the document into a new file mydoc.new, then rename mydoc.cur to mydoc.backup, and then rename mydoc.new to mydoc.cur, so that a failure never left you in the lurch. The only problem was that there was no way to create mydoc.new with the same owner as mydoc.cur, and customers complained that we'd keep changing the owner of their files. If only the semantics of the Unix filesystem had supported file generation numbers, like on TOPS-20 or VAX/VMS, where the default for writing to a file isn't "yeah, sure, write over top of the old data, and let's hope nothing fails along the way", this would not have been a problem.
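
A minimal sketch of that save dance in POSIX C, assuming the editor runs as an ordinary user; the file names are the illustrative ones from the comment, and the final chown() is the step that chown-restricted systems won't let a non-root process do:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* Sketch of the "safe save" dance: write the new contents to a scratch
     * file, shuffle names so a crash never loses data, then try to give the
     * result back to its original owner. */
    int safe_save(const char *cur, const char *new_tmp, const char *backup)
    {
        struct stat st;

        if (stat(cur, &st) != 0)          /* remember the original owner */
            return -1;

        /* ... write the new document contents into new_tmp here ... */

        if (rename(cur, backup) != 0)     /* mydoc.cur -> mydoc.backup */
            return -1;
        if (rename(new_tmp, cur) != 0)    /* mydoc.new -> mydoc.cur    */
            return -1;

        /* Restore ownership of the freshly written file.  For an ordinary
         * (non-root) process this fails with EPERM on systems that restrict
         * chown() to root, which is exactly the grief described above. */
        if (chown(cur, st.st_uid, st.st_gid) != 0)
            perror("chown");

        return 0;
    }

    int main(void)
    {
        /* Illustrative names from the comment above. */
        return safe_save("mydoc.cur", "mydoc.new", "mydoc.backup");
    }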

webdevver 6 hours ago | parent | next [-]

I've always felt that file systems are by far the weakest point in the entire computing industry as we know it.

Something like ZFS should have been bog standard, yet it's touted as an 'enterprise-grade' filesystem. Why is common sense restricted to 'elite' status?

Of course I want transparent compression, dedup, copy-on-write, free snapshots, logical partitions, dynamic resizing, per-user/partition capabilities & QoS. I want it now, here, by default, on everything! (Just to clarify, I've never used ZFS.)

It's so strange that in the compute space you have Docker & cgroups and software-defined networking, while on the hard-drive side I'm dragging boxes in GParted like it's the Victorian era.

Why can't we just... have cool storage stuff? Out of the box?

toast0 4 hours ago | parent | next [-]

All of those things come with tradeoffs.

Compression trades off compute vs. I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.

Dedupe needs indexing to find duplicates and makes writes complex (at least for realtime dedupe). I think online dedupe has pretty limited application, but offline dedupe is interesting.

Copy on write again makes writes complex, and tends to fragment files that are modified. Free snapshots are only free when copy on write is the norm (otherwise, you have to copy on write while a snapshot is open, as on FreeBSD UFS). Copy on write offers a lot, but some applications would suffer.

Dynamic resizing (upwards) is pretty common now; resizing down less so. ZFS downsizing is available, but at least when I tried it, the filesystem became unbootable, so maybe not super useful IMHO.

Logical partitions, per-user stuff, and QoS add complexity that probably isn't needed for everyone.

SoftTalker 5 hours ago | parent | prev | next [-]

Because the vast majority of personal computer users have no need for the complexity of zfs. That doesn't come for free, and if something goes wrong the average user is going to have no hope of solving it.

FAT, ext4, and FFS are all pretty simple and bulletproof, and do everything the typical user needs.

Servers in enterprise settings have higher demands but they can afford an administrator who knows how to manage them and handle problems. In theory.

mixmastamyk 4 hours ago | parent [-]

FAT bulletproof? The newest versions have a few improvements but this is a line of filesystems for disposable sneakernet data.

SoftTalker 2 hours ago | parent [-]

Maybe bulletproof is a bit strong, but I mean, it was fine on DOS/Windows for decades. I never lost data due to filesystem corruption on those computers. Media failures, yes, frequently in the days of floppy disks.

pessimizer 5 hours ago | parent | prev [-]

Because it was extremely difficult to create something like zfs? And it was proprietary and patent-encumbered, and the permissively licensed versions were buggy until about 5 minutes ago?

That's like saying the Romans should have just used computers.

quesera 3 hours ago | parent | prev | next [-]

> caused us grief developing Frame Maker back in the 80s

To be fair, Frame Maker caused the rest of us a whole lot of grief back then, too. :)

The license manager daemon, lmgrd (?) would crash regularly enough that we just patched the dependency out of our binaries. Sorry about that!

SoftTalker 6 hours ago | parent | prev [-]

I would guess that many early systems just didn't have the storage space to keep multiple versions of files. Was VMS saving diffs or full copies of files?

Once storage space was plentiful, the pattern of "overwrite the existing file" was already well established.

kazinator 6 hours ago | parent | prev | next [-]

If you could chown files to an arbitrary other user, you could use that to evade disk quotas.

The protocol for changing ownership should be two-step (a rough code sketch follows the steps below).

1. The file is put into an "offered" state, e.g. "offered to bob". Only the owner or superuser can make this state change.

2. Bob can take an "offered to bob" file and change ownership to bob.

Files can always be in an offered state, i.e. have an offered user, which is normally equal to their owner. So when ownership is taken, the two match again.
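
A toy sketch of what that two-step handoff could look like; the offer/take functions and the in-memory file_meta struct are purely illustrative, not anything Unix actually provides:

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy model of the proposed two-step ownership transfer.  The "offered"
     * field normally equals the owner; offering and taking are separate
     * steps.  This illustrates the protocol, not a real kernel interface. */
    typedef int uid;

    struct file_meta {
        uid owner;
        uid offered_to;   /* normally == owner */
    };

    /* Step 1: only the current owner (or root, uid 0) may offer the file. */
    bool offer(struct file_meta *f, uid caller, uid target)
    {
        if (caller != f->owner && caller != 0)
            return false;
        f->offered_to = target;
        return true;
    }

    /* Step 2: only the offered-to user may take ownership; afterwards
     * owner and offered_to match again. */
    bool take(struct file_meta *f, uid caller)
    {
        if (caller != f->offered_to)
            return false;
        f->owner = caller;
        return true;
    }

    int main(void)
    {
        struct file_meta f = { .owner = 1001, .offered_to = 1001 }; /* alice */
        printf("offer to bob: %d\n", offer(&f, 1001, 1002)); /* alice offers */
        printf("take by eve:  %d\n", take(&f, 1003));        /* rejected     */
        printf("take by bob:  %d\n", take(&f, 1002));        /* accepted     */
        printf("owner is now %d\n", f.owner);
        return 0;
    }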

heythere22 6 hours ago | parent [-]

What's the deal with disk quotas? Saw that in the OT as well. Why would you measure folder size separately for each and every user? Would it not be a lot easier to just use the disk space of a folder regardless of whom the file belongs to?

kazinator 6 hours ago | parent | next [-]

It's not folder size that you measure, but a user's usage: how many blocks are occupied by files belonging to a certain user, no matter where they are.

That's what quotas are: per-user storage limits.

If Bob has a large file sitting in Alice's home directory, that counts toward Bob's quota, not Alice's. If Bob could sneakily change the ownership to Alice, while leaving the permissions open so he could still access the file, then the file would count toward Alice's quota instead.
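
To make the accounting concrete, here's a small sketch that charges a file's blocks to its owner (st_uid) rather than to the directory it lives in, which is essentially how quota bookkeeping works; the path is made up:

    #include <stdio.h>
    #include <sys/stat.h>

    /* Charge a file's disk usage to its owner (st_uid), regardless of which
     * directory it sits in -- the way Unix quotas account for space. */
    static void charge(const char *path)
    {
        struct stat st;

        if (stat(path, &st) != 0) {
            perror(path);
            return;
        }
        /* st_blocks is in 512-byte units. */
        printf("%s: %lld blocks charged to uid %u\n",
               path, (long long)st.st_blocks, (unsigned)st.st_uid);
    }

    int main(void)
    {
        /* A file Bob owns but keeps in Alice's home directory is still
         * charged to Bob, because only st_uid matters here. */
        charge("/home/alice/bigfile-owned-by-bob");   /* hypothetical path */
        return 0;
    }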

pwg 3 hours ago | parent | prev | next [-]

Because, in the early days of Unix systems actually being used as multi-user systems with many simultaneous users, you might have one group of users collaborating on a project, and they would have a shared directory (via the 'group' owner) where they would store shared items. Each user would create various files, and each file's space consumption was charged to that user, but the shared directory might contain multiple files, each owned by a different user (but all owned by the shared 'group' identifier, so the group could access them).

For a group shared directory, assigning the disk space usage of files therein to one single user (ignoring the aspect of "which single user do you pick") is unfair to that user (his/her allowed maximum disk space is consumed) while everyone else is not charged for their actual usage.

This all came about to try to enforce rules to prevent one (or a few) rogue users from using up all disk space on the system for themselves, leaving no one else with any disk space available for their own usage.

siebenmann 4 hours ago | parent | prev [-]

One reason why Unix quotas are generally not maintained and imposed by path is that it's a lot easier to update quotas as things are created, deleted, modified, and so on if the only thing that matters for who gets charged is some attribute of the inode, which you always have available. This was especially the case in the 1980s (when UCB added disk quotas), because that was before kernels tracked name to inode associations in RAM the way they generally do today. (But even today things like hardlinks raise questions.)

(I'm the author of the linked-to article.)

ape4 10 hours ago | parent | prev | next [-]

It was one of those restrictions that seemed unjustified to me but I figured someone smarter than I had seen a reason.

rcxdude 6 hours ago | parent | next [-]

It would need at least a little bit of thought with suid binaries.
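
For what it's worth, here's a rough sketch of the set-uid wrinkle and how systems guard against it; the file name and target uid are made up, and exact behaviour varies by system:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* The set-uid concern: if an unprivileged user could hand a set-uid
     * executable they control to another user, it would then run with that
     * user's privileges.  POSIX defends against this by clearing the
     * set-uid/set-gid bits of a regular file on chown() by an unprivileged
     * process; BSD-style systems go further and refuse the chown() with
     * EPERM.  "./myprog" and uid 1002 are stand-ins. */
    int main(void)
    {
        const char *path = "./myprog";
        struct stat st;

        if (chmod(path, 04755) != 0)      /* mark it set-uid */
            perror("chmod");

        if (chown(path, 1002, (gid_t)-1) != 0)
            perror("chown");              /* EPERM for unprivileged callers */
        else if (stat(path, &st) == 0)
            printf("set-uid bit after chown: %s\n",
                   (st.st_mode & S_ISUID) ? "still set" : "cleared");

        return 0;
    }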

charcircuit 5 hours ago | parent [-]

Suid binaries were a bad idea and should be removed anyways.

gear54rus 8 hours ago | parent | prev | next [-]

Yeah... I'm sitting here wondering how many years it would take to remove the equally stupid error that says 'private key permissions too open' from ssh-add and friends.

It would save me a wrapper script on my flash drive that does hacks like loading the key from stdin or moving it to a temp file.

TZubiri 7 hours ago | parent [-]

It's just a nice security measure.

TZubiri 7 hours ago | parent | prev [-]

Imagine if you wanted to enter a bank safe, but your key doesn't fit the lock. If you were able to change the lock, you would bypass the lock mechanism, rendering it useless.

JadeNB 5 hours ago | parent [-]

But imagine if you were the bank-safe owner. Shouldn't you be able to change the lock?

TZubiri 2 hours ago | parent [-]

That would be what root is.

I think a more appropriate question would be, if the key fits, couldn't you change the lock?

Maybe. That would give you three abilities:

1. Lock yourself out if you please. Not terrible.

2. Provide access to others, which makes sense: since you already have access to the file, you could theoretically share it through other channels anyway, so you naturally cannot prevent this.

3. Lock others out. This one is less of a security risk and more of a nuisance risk.

I think the Unix model is simple; maybe SELinux offers more sophistication. That said, the Unix chown behaviour could have gone either way in terms of security, but in terms of design it makes sense as is.

pessimizer 10 hours ago | parent | prev | next [-]

> Forbidden

> You don't have permission to access /~cks/space/blog/unix/ChownRestrictionEarlyHistory on this server.

I laughed out loud.

https://web.archive.org/web/20251018101005/https://utcc.utor...

TZubiri 7 hours ago | parent | prev [-]

Wait. You can use chown as non-root?

emmelaich 3 hours ago | parent [-]

Yes, in SysIII and SysV. Per the article.

It was possible to chown/chgrp as non-root in Solaris up to some version that I forget.