dekhn 6 hours ago

Single file overheads (opening millions of tiny files whose metadata is not in the OS cache and reading them) appears to be an intrinsic reason (intrinsic to the OS, at least).
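A minimal sketch of that overhead (file counts and sizes are arbitrary, purely for illustration): reading the same number of bytes as thousands of tiny files versus one big file, where the tiny-file path pays an open/stat/close round-trip per file.

```python
import os
import tempfile
import time

N, SIZE = 2000, 64  # assumption: enough tiny files to make per-open cost visible

with tempfile.TemporaryDirectory() as root:
    # Lay out N tiny files and one big file of the same total size.
    tiny = os.path.join(root, "tiny")
    os.mkdir(tiny)
    for i in range(N):
        with open(os.path.join(tiny, f"f{i:05d}"), "wb") as f:
            f.write(b"x" * SIZE)
    big = os.path.join(root, "big.bin")
    with open(big, "wb") as f:
        f.write(b"x" * (N * SIZE))

    # Per-file path: one open()/read()/close() per tiny file,
    # plus a metadata lookup for each directory entry.
    t0 = time.perf_counter()
    total_tiny = 0
    for name in os.listdir(tiny):
        with open(os.path.join(tiny, name), "rb") as f:
            total_tiny += len(f.read())
    tiny_s = time.perf_counter() - t0

    # Single-file path: one open, one sequential read.
    t0 = time.perf_counter()
    with open(big, "rb") as f:
        total_big = len(f.read())
    big_s = time.perf_counter() - t0

    assert total_tiny == total_big == N * SIZE
    print(f"{N} tiny files: {tiny_s:.4f}s   one big file: {big_s:.4f}s")
```

On a warm cache the gap is small; the comment's point is that with cold metadata, each tiny-file open also costs disk I/O for the inode lookup, which this toy benchmark can't show but the syscall count hints at.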

PunchyHamster 2 hours ago

The majority of that will be big files, and on NVMe that is very fast: even single-threaded, 10 Gbit/s should be easy.

pixl97 6 hours ago

IOPS and I/O queue depth are common limits.

Depending on what you're doing, it can be faster to leave your files in a solid archive, which is less likely to be fragmented and gets contiguous reads.
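A minimal sketch of the solid-archive approach (the record names and count are made up for illustration): pack small items into one tar, then stream the members back in a single sequential pass instead of opening each file individually.

```python
import io
import tarfile

# Build a solid archive in memory standing in for one big on-disk file.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for i in range(100):
        data = f"record {i}".encode()
        info = tarfile.TarInfo(name=f"rec/{i:03d}.txt")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Read it back: iterating the TarFile visits members in archive order,
# so the underlying reads are contiguous and there is one open() for
# the whole archive instead of one per file.
buf.seek(0)
records = []
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar:
        f = tar.extractfile(member)
        if f is not None:  # skip directories/special entries
            records.append(f.read())

print(len(records))  # 100
```

The same idea applies to zip, squashfs, or any format where members sit back-to-back; the win disappears if you need random access to individual members of a compressed solid archive, since that forces decompressing from the start.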