Melatonic 4 days ago

CPUs themselves only have so many PCI-E lanes though, right? Wouldn't it make sense (even for consumers) to have peripherals use fewer lanes (but more speed per lane) for a multi-GPU system or something that uses a lot of drives?

zamadatix 4 days ago | parent [-]

More lanes = more cost

Faster lanes = more cost

More faster lanes = lots more cost

The chipset also strikes some of the balance for consumers, though. It has a narrow high-speed connection to the CPU but lets many lower-speed devices share that bandwidth. That way you can have your spare NVMe drive, SATA controller, wired and wireless NICs, sound hardware, most of your USB ports, your capture card, and some other random things connected over a single x4-to-x8-sized channel. This leaves the high-cost lanes for just the devices that actually use them (the GPU and the primary, possibly secondary, storage drive). I've got one consumer-type motherboard with 14 NVMe drives connected, for example, just not at full native speed directly to the CPU.
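Rough arithmetic for that sharing, as a sketch: the per-lane figures are approximate usable throughput after encoding overhead, and the x4 Gen4 chipset uplink plus the device list are assumptions for illustration, not any specific board.

    # Approximate usable PCIe throughput per lane, in GB/s (after encoding overhead)
    PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

    def link_bw(gen, lanes):
        """Usable bandwidth of a PCIe link in GB/s."""
        return PER_LANE[gen] * lanes

    uplink = link_bw(4, 4)  # assumed x4 Gen4 chipset-to-CPU link, ~7.9 GB/s

    # Hypothetical devices hanging off the chipset
    devices = {
        "spare NVMe (Gen3 x4)":   link_bw(3, 4),  # ~3.9 GB/s
        "SATA SSD":               0.6,
        "10 GbE NIC":             1.25,
        "USB 3.2 Gen2 ports":     1.25,
        "capture card (Gen3 x4)": link_bw(3, 4),  # ~3.9 GB/s
    }

    total = sum(devices.values())
    print(f"uplink {uplink:.1f} GB/s, devices could demand {total:.1f} GB/s at peak")
    # ~11 GB/s of potential demand over a ~7.9 GB/s uplink: oversubscribed on
    # paper, but fine in practice because they rarely all peak at once.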

You're just SoL if you want to connect a bunch of really high-bandwidth devices simultaneously (100 Gbps+ NICs, multiple GPUs at full connection speed, a dozen NVMe drives at native speed, or similar), because then you'll be paying for a workstation/server-class platform which did make the "more faster lanes" tradeoff (plus some market-segment gouging, of course).

vladvasiliu 4 days ago | parent [-]

One issue is that, at least on cheaper mobos, lanes aren't handled as a "total bandwidth budget". And, especially with newer PCIe generations, that can be a bit frustrating.

Many mobos split the total number of active lanes across the available slots. If you then use older-generation cards, you only get a fraction of the possible bandwidth, because each card runs its share of the lanes at its older, slower speed, even though the lanes are physically present in the slot.

What I'm thinking about is something like, say, a pair of Gen3 NVMe drives that are good enough for mass storage (running in RAID-1 for good measure) and some cheap used 10 Gb NIC, which will probably be Gen2 x8, all running on a Gen4+ capable mobo.

And while, for a general-purpose setup, I can live with splitting the available BW between the NIC and the GPU (I most likely don't care about my download going at full speed while I game), those downloads generally go to storage, so the NIC and the drives do need to be fast at the same time.
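To put numbers on that: a PCIe link negotiates down to the minimum of what the slot and the card support, in both generation and lane count. A small sketch (illustrative per-lane figures; the x8/x4 split and the Gen2 x8 NIC are the hypothetical parts):

    # Negotiated link = min(slot, card) for both generation and lane count
    PER_LANE = {2: 0.5, 3: 0.985, 4: 1.969}  # approx. usable GB/s per lane

    def negotiated_bw(slot_gen, slot_lanes, card_gen, card_lanes):
        gen = min(slot_gen, card_gen)
        lanes = min(slot_lanes, card_lanes)
        return PER_LANE[gen] * lanes

    # Gen4 slot running at x8 (lanes split with the GPU), old Gen2 x8 NIC:
    print(negotiated_bw(4, 8, 2, 8))  # ~4.0 GB/s -- plenty for 10 GbE (1.25 GB/s)

    # The same NIC if the slot only gets x4 electrically:
    print(negotiated_bw(4, 4, 2, 8))  # ~2.0 GB/s -- the slot's Gen4 speed
                                      # can't make up for the halved lane count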

zamadatix 3 days ago | parent [-]

Those MBs are cheaper precisely because supporting this kind of bandwidth breakout adds cost (-> a fancier PCIe switch in a higher-end chipset/southbridge). If you add that support to them, you end up with the more expensive motherboard. Some of the highest-end motherboards actually have 2 chipsets/PCIe switches: more cost, but more bandwidth sharing for the same number of lanes coming from the CPU.

You can also buy external PCIe switches (just make sure you're not accidentally buying a PCIe bifurcation device). Most of the time it's cheaper to just buy the higher-end motherboard though, e.g. I don't want to know what the "Request a quote" price is for this PCIe switch, which can do x8 4.0 upstream and then quad x4 3.0 downstream: https://www.amfeltec.com/pci-express-gen-4-carrier-board-for... I do have a few 3.0-era cards which were more reasonably priced though https://www.aliexpress.us/item/3256801702762036.html?gateway... and they've worked well for me.
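If you do end up with cards behind a switch or a split slot, it's worth checking what they actually negotiated. A minimal sketch, assuming Linux and the standard sysfs link attributes (not every PCI function exposes them):

    from pathlib import Path

    # Compare the negotiated link of each PCI device against its capability,
    # to spot cards that ended up downtrained behind a narrow switch or slot.
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        try:
            cur_speed = (dev / "current_link_speed").read_text().strip()
            max_speed = (dev / "max_link_speed").read_text().strip()
            cur_width = (dev / "current_link_width").read_text().strip()
            max_width = (dev / "max_link_width").read_text().strip()
        except OSError:
            continue  # no link attributes for this function
        note = "" if (cur_speed, cur_width) == (max_speed, max_width) else "  <-- downtrained"
        print(f"{dev.name}: x{cur_width} @ {cur_speed} (max x{max_width} @ {max_speed}){note}")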

vladvasiliu 3 days ago | parent [-]

How high-end are we talking about? Do you know off-hand of any models supporting this?

I haven't seen such features on boards under 200 EUR from Asus, ASRock, and Gigabyte.

The thing is, if I have to splurge on some 400 EUR "gaming" model, I might as well move to a "workstation" CPU supporting more lanes out of the box, and the mobo will be priced roughly the same.