Melatonic 4 days ago
CPUs themselves only have so many PCI-E lanes though, right? Wouldn't it make sense (even for consumers) to have peripherals use fewer lanes (but more speed per lane) for a multi-GPU system or something that uses a lot of drives?
zamadatix 4 days ago | parent
More lanes = more cost. Faster lanes = more cost. More, faster lanes = lots more cost.

The chipset also strikes some of that balance for consumers, though. It has a narrow high-speed connection to the CPU but lets many lower-speed devices share that bandwidth. That way your spare NVMe drive, SATA controller, wired and wireless NICs, sound hardware, most of your USB ports, your capture card, and some other random things all hang off a single x4- to x8-sized channel. This leaves the high-cost lanes for just the devices that actually use them (the GPU and the primary, possibly secondary, storage drive). I've got one consumer-type motherboard with 14 NVMe drives connected, for example, just not at full native speed directly to the CPU.

You're just SoL if you want to connect a bunch of really high-bandwidth devices simultaneously (100 Gbps+ NICs, multiple GPUs at full connection speed, a dozen NVMe drives at native speed, or similar), because then you'll be paying for a workstation/server-class platform that did make the "more, faster lanes" tradeoff (plus some market-segment gouging, of course).
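To put rough numbers on the "narrow uplink shared by many devices" point, here's a back-of-the-envelope sketch. The figures are my own illustrative assumptions (a PCIe 4.0 x4 chipset uplink and Gen4 x4 NVMe drives), not anything specific to the board above:

    # Rough PCIe 4.0 bandwidth arithmetic (illustrative numbers only).
    # PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding,
    # so each lane carries roughly 1.97 GB/s in each direction.
    TRANSFERS_PER_LANE = 16e9        # PCIe 4.0 transfer rate per lane (T/s)
    ENCODING = 128 / 130             # 128b/130b line-encoding efficiency
    BYTES_PER_LANE = TRANSFERS_PER_LANE * ENCODING / 8

    def link_gbs(lanes: int) -> float:
        """Usable one-direction bandwidth of a PCIe 4.0 link, in GB/s."""
        return lanes * BYTES_PER_LANE / 1e9

    chipset_uplink = link_gbs(4)     # assumed x4 chipset-to-CPU uplink
    one_nvme_drive = link_gbs(4)     # a Gen4 NVMe drive also wants x4

    print(f"x4 chipset uplink: {chipset_uplink:.1f} GB/s")
    print(f"one Gen4 x4 drive: {one_nvme_drive:.1f} GB/s")
    # Three such drives behind the chipset want ~3x the uplink, which is
    # why they share bandwidth instead of running at full native speed.
    print(f"3 drives want {3 * one_nvme_drive:.1f} GB/s -> oversubscribed")

The x4 uplink works out to about 7.9 GB/s each direction, which is the same as what a single Gen4 x4 drive can ask for, so anything more than one busy device behind the chipset is inherently sharing.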