▲ | kllrnohj 3 days ago |
> The homelab group on Reddit is full of people who don't understand any of this - they have full racks in their house that could be replaced with one high-end desktop. A lot of that group is making use of the I/O capabilities of these systems to run lots of PCI-E devices and hard drives. There's no cost-effective modern equivalent for that. If there were a cheap way to do something like take a PCI-E 5.0 x2 link and turn it into a PCI-E 3.0 x8, that'd be incredible, but there really isn't. So raw PCI-E lane count still matters if you want cheap networking gear or HBAs or whatever, and raw PCI-E lane count is $$$$ if you're buying new. These old systems also mean cheap RAM in large, large capacities. Like 128GB of RAM to make ZFS or VMs purr is much cheaper on these used systems than on anything modern.
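(The x2-to-x8 equivalence above checks out on paper: a quick sanity check, assuming published per-lane transfer rates and 128b/130b encoding for gen 3 and up; real-world throughput is lower due to protocol overhead.)

```python
# Theoretical one-direction PCIe bandwidth per generation/lane count.
# Assumed figures: 8/16/32 GT/s per lane, 128b/130b line encoding.
GTS = {3: 8.0, 4: 16.0, 5: 32.0}  # transfer rate per lane, GT/s

def bandwidth_GBps(gen: int, lanes: int) -> float:
    """Raw link bandwidth in GB/s for a given PCIe gen and lane count."""
    return GTS[gen] * lanes * (128 / 130) / 8

print(f"PCIe 5.0 x2: {bandwidth_GBps(5, 2):.2f} GB/s")  # ~7.88 GB/s
print(f"PCIe 3.0 x8: {bandwidth_GBps(3, 8):.2f} GB/s")  # ~7.88 GB/s
```

Same raw bandwidth either way; the problem is that no cheap bridge/switch silicon exists to do that fan-out, which is the commenter's point.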
▲ | mattbillenstein 3 days ago | parent [-] |
Perhaps, but I don't really get the dozens-of-TB-of-storage home use case a lot of the time either. If you have a large media library, you need to push maybe 10MB/s - you don't need 128GB of RAM to do that... It's mostly just hardware porn. There may be a few legitimate use cases for the old hardware, but they're exceedingly rare in my estimation.
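(For scale on the 10MB/s figure: a rough check, assuming typical bitrates - a 1080p stream around 20 Mbps and a 4K Blu-ray remux topping out near 100 Mbps; these are assumed ballpark numbers, not from the thread.)

```python
# Convert megabits per second (stream bitrate) to megabytes per second
# (what the file server actually has to push).
def mbps_to_MBps(mbps: float) -> float:
    return mbps / 8

for label, mbps in [("1080p stream", 20), ("4K remux", 100)]:
    print(f"{label}: {mbps_to_MBps(mbps):.1f} MB/s")
```

Even a worst-case 4K remux is ~12.5 MB/s, so a single client sits well under the 10MB/s-per-stream ballpark the comment cites.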