Dylan16807 (8 hours ago):

Piles of small files are unpleasant to deal with. Merely iterating over millions of files, without even touching their contents, gets annoying, and backing up or moving big directories is worse. If a hard drive is involved it gets really bad: a drive can probably only seek about 10 million times in an entire day.
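
A rough back-of-envelope for that seek budget (the ~120 IOPS figure is an assumed typical value for a 7200 RPM drive, not a measurement):

    # Rough seek budget for a spinning disk (assumed figures).
    iops = 120                     # ~8 ms average seek => ~120 random IOPS
    seconds_per_day = 24 * 60 * 60
    seeks_per_day = iops * seconds_per_day
    print(f"{seeks_per_day:,}")    # 10,368,000 -- about 10 million seeks/day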
|
adgjlsfhk1 (11 hours ago):

Files in nested folders are primarily an abstraction for humans: a maximally flexible, customizable system. That flexibility has substantial costs (especially in environments with parallel work), and in practice no one really has millions of pieces of fully separate, unstructured, hierarchical data. Once you have that much data, there is almost always additional structure that would be better represented in something like a database, where you can actually express the invariants you have.
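
A minimal sketch of that idea, assuming the extra structure is simple keyed records; the table and column names here are made up for illustration:

    import sqlite3

    conn = sqlite3.connect("records.db")
    # The schema states invariants a directory tree can't express or
    # enforce: every record has a unique key and a non-null payload.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS records (
            key     TEXT PRIMARY KEY,   -- replaces the file path
            payload BLOB NOT NULL       -- replaces the file contents
        )
    """)
    conn.executemany(
        "INSERT OR REPLACE INTO records (key, payload) VALUES (?, ?)",
        ((f"item-{i}", b"x") for i in range(1_000_000)),
    )
    conn.commit()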
hexo (2 hours ago):

A filesystem is essentially a "simple" database; if it doesn't perform well, it's not a good DB. How many files you have shouldn't really matter if the metadata, and the indexing of that metadata, are done properly (i.e. as in a good DB). It also has benefits over a typical DB that usually don't even exist there, because they aren't practical (like random access).
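
If "random access" here means cheap reads at arbitrary byte offsets, a hypothetical sketch (the file name and sizes are made up):

    import os

    # Create a sparse 1 GiB test file, then read 4 KiB from near the end
    # without touching the rest of it.
    with open("big.dat", "wb") as f:                # throwaway test file
        f.truncate(1 << 30)                         # sparse on most filesystems
    fd = os.open("big.dat", os.O_RDONLY)
    chunk = os.pread(fd, 4096, (1 << 30) - 4096)    # jump straight to the tail
    os.close(fd)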
pitched (10 hours ago):

Aren't block sizes (and therefore the minimum on-disk file size) normally around 4 KiB? So the maximum number of 1-byte files (2^32 of them) would take up around 16 TiB, before counting any other overhead. Drives that size are available these days.
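
Checking the arithmetic, assuming the 2^32 file-count cap and a 4 KiB block:

    files = 2**32                   # 4,294,967,296 one-byte files
    block = 4096                    # 4 KiB allocated per file
    print(files * block / 2**40)    # 16.0 -- i.e. 16 TiB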
adgjlsfhk1 (10 hours ago):

Many filesystems support sub-block allocation, so small files don't necessarily each consume a full block.
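
A quick way to check what a given filesystem actually allocates for a tiny file (POSIX-only sketch; the reported figure varies by filesystem):

    import os

    with open("tiny.txt", "wb") as f:   # throwaway test file
        f.write(b"x")                   # a single byte of content
    st = os.stat("tiny.txt")
    print(st.st_size)                   # 1   (apparent size)
    print(st.st_blocks * 512)           # allocated bytes, in 512-byte units;
                                        # typically 4096 on ext4, less with
                                        # inline or sub-block allocation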
mastax (10 hours ago):

Nobody wants to store 2^32 one-byte files, and if you do, frankly, you can make your own filesystem.
|
|