| ▲ | Retr0id 5 hours ago |
| Considered broken by what? |
|
| ▲ | rleigh 5 hours ago | parent [-] |
| Historically, it made deletion rather difficult, with some problematic edge cases. You could unlink a directory and create an orphan cycle that would never be deleted. Combine that with race conditions on multi-user systems, plus the indeterminate cost of cycle detection, and it turns out to be a rather complex problem to solve properly. Banning hard links to directories is a very simple way to keep the problem tractable, and it results in fast, robust, and reliable filesystem operations. |
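A minimal sketch of the ban described above, assuming Linux/macOS semantics: POSIX `link(2)` refuses a directory as its source with `EPERM`, so the cycle-creating operation simply cannot happen.

```python
import errno
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    subdir = os.path.join(tmp, "subdir")
    os.mkdir(subdir)
    try:
        # Attempt to hard-link a directory; if this succeeded, a link such as
        # subdir/loop -> subdir could later become an orphaned cycle.
        os.link(subdir, os.path.join(tmp, "loop"))
        raised = None
    except OSError as e:
        raised = e
    # The kernel refuses outright: no directory hard links, no orphanable cycles.
    print(raised is not None and raised.errno == errno.EPERM)
```

Refusing the operation at creation time is what makes deletion cheap: `rmdir`/`unlink` never need cycle detection because the link graph over directories stays a tree.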
| |
| ▲ | Retr0id 4 hours ago | parent [-] |
| GP was talking about symlink cycles though, which can't produce orphans during deletion. |
| ▲ | rleigh 4 hours ago | parent [-] |
| True, I missed that. I suppose with symlinks you have the reverse problem: you can point to deleted filenames and end up with broken links. The cycle detection is still an issue, though: it has indeterminate complexity, and the graph can be modified as you are traversing it! |
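The "reverse problem" above can be shown in a few lines: the symlink survives the deletion of its target and becomes a dangling link that exists but no longer resolves.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "data.txt")
    link = os.path.join(tmp, "link")
    with open(target, "w") as f:
        f.write("hello")
    os.symlink(target, link)
    os.remove(target)                      # delete the file the link points to
    still_a_link = os.path.lexists(link)   # the link entry itself still exists...
    resolves = os.path.exists(link)        # ...but following it now fails
    print(still_a_link, resolves)
```

Nothing tracks the back-reference: the filesystem stores the symlink as a mere pathname, so deleting the target cannot (and does not try to) clean up links pointing at it.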
| ▲ | Retr0id 3 hours ago | parent [-] |
| This is true, but just about everyone has a symlink cycle on their system at `/proc/self/root`, and for the most part nobody notices. Having a max recursion depth is usually more useful than actively trying to detect cycles. |
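The depth-limit approach can be sketched as follows, assuming Linux semantics: two symlinks pointing at each other form a cycle, and path resolution gives up with `ELOOP` after a fixed number of steps rather than doing any cycle detection.

```python
import errno
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    a = os.path.join(tmp, "a")
    b = os.path.join(tmp, "b")
    os.symlink(b, a)  # a -> b
    os.symlink(a, b)  # b -> a: a two-link cycle
    try:
        open(a).close()
        err = None
    except OSError as e:
        err = e
    # Resolution hits the kernel's nesting limit and fails with ELOOP.
    print(err is not None and err.errno == errno.ELOOP)
```

On Linux the limit is 40 nested symlinks (see `path_resolution(7)`), which bounds the cost of every lookup without any per-traversal bookkeeping, and is why the `/proc/self/root` cycle is harmless in practice.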