jacquesm 2 days ago

I did the same, then put in 14 3090's. It's a little bit power hungry but fairly impressive performance-wise. The hardest parts are power distribution and riser cards, but I found good solutions for both.

r0b05 2 days ago | parent | next [-]

I think 14 3090's are more than a little power hungry!

jacquesm 2 days ago | parent [-]

To the point that I had to pull an extra circuit... but it's three-phase, so I'm good to go even if I'd like to go bigger.

I've limited power consumption to what I consider the optimum: each card will draw ~275 W (you can very nicely configure this on a per-card basis). The server itself also uses some for the motherboard. The whole rig is powered from four 1600 W supplies; the GPUs are divided 5/5/4 and the motherboard is connected to its own supply. It's a bit close to the edge for the supplies that have five 3090's on them, but so far it has held up quite well, even with higher ambient temps.
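For anyone wanting to reproduce the per-card limit: something along these lines works, as a minimal sketch, assuming the stock nvidia-smi tool is on the path and your driver lets you set limits (needs root). The 275 W figure is just the value mentioned above.

    import subprocess

    # Example per-card power caps in watts, matching the ~275 W mentioned above.
    POWER_LIMITS = {i: 275 for i in range(14)}

    def set_power_limit(gpu_index: int, watts: int) -> None:
        """Cap a single GPU's power draw via nvidia-smi (requires root)."""
        subprocess.run(
            ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
            check=True,
        )

    if __name__ == "__main__":
        for idx, watts in POWER_LIMITS.items():
            set_power_limit(idx, watts)
        # Sanity check: show the resulting limits and current draw per card.
        subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,power.limit,power.draw",
             "--format=csv"],
            check=True,
        )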

Interesting tidbit: at 4 lanes/card throughput is barely impacted; 1 or 2 is definitely too low. 8 would be great, but the CPUs don't have that many lanes.

I also have a Threadripper which should be able to handle that much RAM, but at current RAM prices that's not interesting (that server I could populate with RAM I still had that fit that board, plus some more I bought from a refurbisher).

nonplus a day ago | parent [-]

What PCIe version are you running? Normally I would not mention one of these, but you have already invested in all the cards, and it could free up some lanes if any of the ones being used now are 3.0.

If you can spare the 16 (PCIe 3) lanes, you could get a PLX switch ("PCIe Gen3 PLX Packet switch X16 - x8x8x8x8" on eBay for around $300) and get four of your cards up to x8.

jacquesm a day ago | parent [-]

All are PCIe 3.0. I wasn't aware of those switches at all, despite buying my risers and cables from that source! Unfortunately all of the slots on the board are x8; there are no x16 slots at all.

So that switch would probably work, but I wonder how big the benefit would be: you would probably see effectively an x4 -> (x4 / x8) -> (x8 / x8) -> (x8 / x8) -> (x8 / x4) -> x4 pipeline, and then on to the next set of four boards.

It might run faster on account of the three passes that are then double the speed they are right now, as long as the CPU does not need to talk to those cards and all transfers are between layers on adjacent cards (very likely). With even more luck (due to timing and lack of overlap) it might run the two x4 passes at approaching x8 speeds as well. And then of course you need to do this a couple of times, because four cards isn't enough, so you'd need four of those switches.

I have not tried having a single card with fewer lanes in the pipeline, but that should be an easy test to see what the effect of such a constriction on throughput would be.
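If you do try that, the link each card actually negotiated is easy to verify; here's a minimal sketch using nvidia-smi's PCIe query fields, assuming a driver recent enough to expose them:

    import csv
    import io
    import subprocess

    # Negotiated PCIe generation and width per GPU; under load the "current"
    # values should match what the slot/riser actually provides.
    FIELDS = ("index,name,pcie.link.gen.current,"
              "pcie.link.width.current,pcie.link.width.max")

    def pcie_link_report() -> list[dict]:
        out = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [
            dict(zip(FIELDS.split(","), (col.strip() for col in row)))
            for row in csv.reader(io.StringIO(out))
        ]

    if __name__ == "__main__":
        for gpu in pcie_link_report():
            print(gpu)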

But now you have me wondering to what extent I could bundle 2 x8 into an x16 slot and then use four of these cards inserted into a fifth! That would be an absolutely unholy assembly, but it has the advantage that you'd need far fewer risers: just one x16-to-x8/x8 run in reverse. I have no idea if that's even possible, but I see no reason right away why it would not work, unless there are more driver chips in between the slots and the CPUs, which may be the case for some of the farthest slots.

PCIe is quite amazing in terms of the topology tricks that you can pull off with it, and c-payne's stuff is extremely high quality.

nonplus a day ago | parent [-]

If you end up trying it please share your findings!

I've basically been putting this kind of gear in my cart, then deciding I don't want to manage more than the two 3090s, 4090 and A5000 I have now, and then I take the PLX out of my cart.

Seeing as you have the cards already, it could be a good fit!

jacquesm a day ago | parent [-]

Yes, it could be. Unfortunately I'm a bit distracted by both paid work and some more urgent stuff, but eventually I will get back to it. By then this whole rig might be hopelessly outdated, but we've done some fun experiments with it and have kept our confidential data in-house, which was the thing that mattered to me.

r0b05 a day ago | parent [-]

Yes, the privacy is amazing, and there's no rate limiting so you can be as productive as you want. There are also tons of learnings in this exercise. I have just 2x 3090's and I've learnt so much about PCIe and hardware that it just makes the creative process that much more fun.

The next iteration of these tools will likely be more efficient, so we should be able to run larger models at a lower cost. For now though, we'll run nvidia-smi and keep an eye on those power figures :)

jacquesm 21 hours ago | parent [-]

You can tune that power down to what gives you the best token count per joule, which I think is a very important metric by which to optimize these systems, and by which you can compare them as well.
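As a rough illustration of how you could measure that metric, a sketch only: run_benchmark is a hypothetical hook into whatever inference server you use (it should run a fixed generation workload and return the token count), and energy is approximated by sampling power.draw across all GPUs once a second.

    import subprocess
    import threading
    import time

    def total_power_w() -> float:
        """Sum of power.draw over all GPUs, in watts, via nvidia-smi."""
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            check=True, capture_output=True, text=True,
        ).stdout
        return sum(float(line) for line in out.splitlines() if line.strip())

    def tokens_per_joule(run_benchmark, interval_s: float = 1.0) -> float:
        """Integrate sampled power while the benchmark runs, then divide."""
        samples: list[float] = []
        stop = threading.Event()

        def sampler() -> None:
            while not stop.is_set():
                samples.append(total_power_w())
                time.sleep(interval_s)

        thread = threading.Thread(target=sampler, daemon=True)
        start = time.time()
        thread.start()
        tokens = run_benchmark()  # hypothetical: returns tokens generated
        stop.set()
        thread.join()
        elapsed = time.time() - start
        energy_j = (sum(samples) / max(len(samples), 1)) * elapsed  # avg W * s
        return tokens / energy_j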

I have a hard time understanding all of these companies that toss their NDAs and client confidentiality to the wind and feed newfangled AI companies their corporate secrets with abandon. You'd think there would be a more prudent approach to this.

tucnak a day ago | parent | prev [-]

You get occasional accounts of 3090 home-superscalers putting up eight, ten, fourteen cards; I normally attribute this to obsessive-compulsive behaviour. What kind of motherboard did you end up using, and what bi-directional bandwidth are you seeing? Something tells me you're not using EPYC 9005's with up to 256x PCIe 5.0 lanes per socket or something... Also: I find it hard to believe the "performance" claims when your rig is pulling 3 kW from the wall (assuming undervolting at 200 W per card?). The electricity costs alone would surely make this intractable, i.e. the same as running six washing machines all at once.

jacquesm a day ago | parent [-]

I love your skepticism of what I consider to be a fairly normal project; this is not to brag, simply to document.

And I'm way above 3 kW, more likely 5,000 to 5,500 W with the GPUs running as high as I'll let them, or thereabouts, but I only have one power meter and it maxes out at 2,500 watts or so. This is using two Xeons in a very high-end but slightly older motherboard. When it runs, the space that it is in becomes hot enough that even in the winter I have to use forced air from outside, otherwise it will die.

As for electricity costs, I have 50 solar panels and on a good day they more than offset the electricity use; at 2 pm (solar noon here) I'd still be pushing 8 kW extra back into the grid. This obviously does not work out so favorably in the winter.

Building a system like this isn't very hard, it is just a lot of money for a private individual, but I can afford it. I think this build is a bit under $10K, so a fraction of what you'd pay for a commercial solution, but obviously far less polished and still less performant. But it is a lot of bang for the buck, and I'd much rather have this rig at $10K than the first commercial solution available at a multiple of that.

I wrote a bit about power efficiency in the run-up to this build when I only had two GPUs to play with:

https://jacquesmattheij.com/llama-energy-efficiency/

My main issue with the system is that it is physically fragile; I can't transport it at all, you basically have to take it apart, move the parts and re-assemble it on the other side. It's just too heavy, and the power distribution is messy, so you end up with a lot of loose wires and power supplies. I could make a complete enclosure for everything, but this machine is not running permanently, and when I need the space for other things I just take it apart and store the GPUs in their original boxes until the next AI project I run at home. Putting it all together is about two hours of work. We call it Frankie, on account of how it looks.

edit: one more note, the noise it makes is absolutely incredible and I would not recommend running something like this in your house unless you (1) are crazy or (2) have a separate garage where you can install it.

tucnak 12 hours ago | parent [-]

Thanks for replying, and your power story does make more sense all things considered. I'm no stranger to homelabbing; in fact just now I'm running both an IBM POWER9 system (really power-hungry) and an AMD 8004, both watercooled now while trying to bring the noise down. The whole rack, along with 100G switches and NICs/FPGAs, is certainly keeping us warm in the winter! And it's only dissipating up to 1.6 kW, mostly thanks to the ridiculous efficiency of the 8434PN CPU, which is something like 48 cores at 150 W.

I cannot imagine dissipating 5 kW at home!

jacquesm 3 hours ago | parent [-]

I stick the system in my garage when it is working... I very enthusiastically put it together on the first iteration (with only 8 GPUs) in the living room while the rest of the family was holidaying, but that very quickly turned out to be a mistake. It has a whole pile of high-speed fans mounted in the front and the noise was roughly comparable to sitting in a jet about to take off.

One problem that move caused was that I didn't have a link to the home network in the garage, and the files that go to and from that box are pretty large, so in the end I strung a UTP cable through a crazy path of little holes everywhere until it reached the switch in the hallway cupboard. The devil is always in the details...

Running a POWER9 in the house is worthy of a blog post :)

As for Frankie: I fear his days are numbered. I've already been eyeing more powerful solutions, and for the next batch of AI work (most likely large-scale video processing and model training) we will probably put something better together, otherwise it will simply take too long.

I almost bought a second-hand, fully populated NVidia AI workstation, but the seller was more than a little bit shady and kept changing the story about how they got it and what they wanted for it. In the end I abandoned that because I didn't feel like being used as a fence for what was looking more and more like stolen property. But buying something like that new is out of reach for me; at 20 to 30% of list I might do it, assuming the warranty transfers, and that's not a complete fantasy: there are enough research projects that have this kind of gear and sell it off when the project ends.

People joke I don't have a house but a series of connected workshops and that's not that far off the mark :)