vladgur 4 days ago

I am exploring options just for fun.

A used 3090 is around $900 on eBay; a used RTX 6000 Ada is around $5k.

Four 3090s are slower at inference and worse at training than one RTX 6000.

4x3090 would consume ~1400W at load.

A single RTX 6000 would consume ~300W at load.

If you, God forbid, live in California and your power averages 45 cents per kWh, 4x3090 would cost $1500+ more per year to operate than a single RTX 6000.[0]

[0] Back-of-the-napkin/ChatGPT calculation, assuming the GPUs run at load 8 hours per day.
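A minimal Python sketch of that napkin math, using the wattage, rate, and hours-per-day figures above:

    # Napkin math: load wattage x hours/year x $/kWh.
    RATE_PER_KWH = 0.45       # California average cited above
    HOURS_PER_YEAR = 8 * 365  # 8 hours/day at load

    def annual_cost(watts):
        return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

    print(annual_cost(1400))                     # 4x3090: ~$1,840/yr
    print(annual_cost(300))                      # RTX 6000: ~$394/yr
    print(annual_cost(1400) - annual_cost(300))  # difference: ~$1,445/yr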

Note: I own a PC with a 3090, but if I had to build an AI training workstation, I would seriously consider cost to operate and resale value (per component).

ismailmaj 4 days ago | parent | next [-]

To make matters worse, the RTX 3090 was released during the crypto craze, so a decent share of the second-hand market could be overused GPUs that won't last long. Even though the 3xxx-to-4xxx performance difference is not that big, I would avoid the 3xxx series entirely for resale value.

aunty_helen 4 days ago | parent | next [-]

I bought 2 ex-mining 3090s ~3 years ago. They're in an always-on PC that I remote into, and I haven't had a problem. If there were mass failures of GPUs due to mining, I would expect to have heard more about it.

segmondy 4 days ago | parent | prev [-]

I have a rig of 7 3090s that I bought from crypto bros, and they have been chugging along fine for the last 2 years. GPUs are electronic devices, not mechanical ones; they rarely blow up.

akulbe 4 days ago | parent | next [-]

How do you have a rig that fits that many cards?? Those things take 3 slots apiece.

Pictures, or it never happened! :D

dehugger 3 days ago | parent [-]

You get a motherboard designed for the purpose (many PCIe slots) and a case (usually an open frame) that holds that many cards. Riser cables are used so that not every card plugs directly into the motherboard.

jonbiggums22 4 days ago | parent | prev [-]

I've noticed on eBay there are a lot of 3090s for sale that seem to have rusted or corroded heatsinks. I can't recall seeing this with used GPUs before, but maybe I just haven't been paying attention. Does this have to do with running them flat out in a basement or something?

dwood_dev 4 days ago | parent [-]

Run them near a saltwater source without AC and that will happen.

supermatt 4 days ago | parent | prev | next [-]

I guess it depends on what you want to do: you get half the VRAM with the 6000 (48GB @ ~$104/GB) vs 4x3090 (96GB @ ~$37.50/GB).

cfn 4 days ago | parent | prev | next [-]

I have an A6000, and the main advantages over a 3090 cluster are the build simplicity and relative silence of the machine (it also serves as my main dev workstation).

logicallee 4 days ago | parent | prev | next [-]

>I am exploring options just for fun.

Since you're exploring options just for fun, out of curiosity: would you rent it out whenever you're not using it yourself, so it's not just sitting idle? (It could be noisy, though.) You'd be able to use your computer for other work at the same time and stop renting whenever you wanted to use it yourself.

vladgur 4 days ago | parent [-]

It depends. At my electricity cost, 1 hour of a 3090 or 1 hour of an RTX 6000 would cost the same $0.45.

Just checked vast.ai. I would be losing money with a 3090 at my electricity cost and making a tiny bit with an RTX 6000.

Like with boats, it's probably better to rent GPUs than buy them.
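A rough break-even sketch using my ~$0.45/hour running cost; the rental rates below are hypothetical placeholders, not actual vast.ai quotes:

    # Break-even: marketplace income minus my ~$0.45/hour running cost.
    RUNNING_COST = 0.45  # $/hour at my electricity rate

    def hourly_margin(rental_rate):
        # rental_rate: $/hour earned from renters (hypothetical values below)
        return rental_rate - RUNNING_COST

    print(hourly_margin(0.30))  # hypothetical 3090 rate: -$0.15/h, a loss
    print(hourly_margin(0.55))  # hypothetical RTX 6000 rate: ~+$0.10/h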

logicallee 4 days ago | parent | next [-]

(You should also be compensated for the noise and inconvenience, not only the electricity.) It sounds like you might rent it out if the rental price were higher.

justinclift 4 days ago | parent | prev [-]

Would a solar panel setup be an option for fixing that? :)

segmondy 4 days ago | parent | prev [-]

... and this is why napkin math is terrible. Even running a GPU at load doesn't mean you use the full wattage. Four 3090s running inference on a large model barely use 350W combined.

vladgur 3 days ago | parent [-]

Can you clarify? Even if you downclock the cards to 300W, why would running them at load not consume 4x300W?

segmondy a day ago | parent [-]

Inference often draws around 200-250W even without the card downclocked, and the other cards sit at around 20-50W. With 4 cards, only one card is active at a time. To hit the full 350W, you need to run parallel inference on the card with multiple users. If I were using it as a server card with 10 active users/processes, I might max out the active card. For example, I have a rig with 10 MI50 cards, which I believe are 250W each. Yet I rarely see the active card pass 200W, and they idle at about 20W, so that's 180W + 200W = around 380-400W at full load.

Think of the max wattage like a car's max horsepower: a car might make 350HP, but that doesn't mean it makes 350HP all day long; there's a curve to it. At the low end it might be making 170HP, and you need to floor the gas pedal to get to 350HP. Same with these GPUs. Most people calculate gas mileage by finding how much gas a car consumes at its peak and say, oh, 6MPG when it's making 350HP, so with your 20-gallon tank you have a range of 120 miles. Which obviously isn't true.
Think of the max watt like a car's max horsepower, a car might make 350HP, it doesn't mean it stays making 350HP all day long, there's a curve to it. At the low end it might be making 170HP and you will need to floor the gas pedal to get to that 350hp. Same with these GPUs. Most people will calculate the gas mileage by finding how much gas a car consumers at it's peak and say, oh, 6mpg when it's making 350hp so with your 20gallon thank, you have a range of 120miles. Which obviously isn't true.