oblio 19 hours ago

Yeah, I'm a regular Joe. How do I get one and how much does it cost?

Dylan16807 18 hours ago | parent [-]

If your goal is "a TPU" then you buy a Mac or anything labeled Copilot+. You'll need about $600. RAM is likely to be your main limit.

(A mid to high end GPU can get similar or better performance but it's a lot harder to get more RAM.)

haiku2077 17 hours ago | parent | next [-]

$500 if you catch a sale at Costco or Best Buy!

oblio 17 hours ago | parent | prev [-]

I want something I can put in my own PC. GPUs are utterly insane in pricing, since for the good stuff you need at least 16GB but probably a lot more.

Dylan16807 17 hours ago | parent [-]

9060 XT 16GB, $360

5060 Ti 16GB, $450

If you want more than 16GB, that's when it gets bad.

And you should be able to get two and load half your model into each. It should be about the same speed as if a single card had 32GB.

oblio 3 hours ago | parent [-]

> And you should be able to get two and load half your model into each. It should be about the same speed as if a single card had 32GB.

This seems super duper expensive and not really supported by the more reasonably priced Nvidia cards, though. SLI is deprecated, NVLink isn't available everywhere, etc.

Dylan16807 3 hours ago | parent [-]

No, no, nothing like that.

Every layer of an LLM runs separately and sequentially, and there isn't much data transfer between layers. If you wanted to, you could put each layer on a separate GPU with no real penalty. A single request will only run on one GPU at a time, so it won't go faster than a single GPU with a big RAM upgrade, but it won't go slower either.
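To make the point concrete, here's a minimal sketch (plain Python, no real GPU framework; the device names and layer count are made up for illustration) of the layer-split idea: put the first half of the layers on one card, the second half on the other, and run a request through them in order. The activations only have to hop between cards once, at the boundary between the two halves, which is why the split costs roughly nothing.

```python
# Hypothetical sketch of pipeline-style layer splitting across two GPUs.
# "gpu0"/"gpu1" and the layer count are illustrative, not a real API.

NUM_LAYERS = 32

# First half of the layers on gpu0, second half on gpu1.
placement = ["gpu0" if i < NUM_LAYERS // 2 else "gpu1"
             for i in range(NUM_LAYERS)]

def run_request(activations):
    """Run one request through all layers sequentially, counting
    cross-device transfers of the activation tensor."""
    transfers = 0
    current_device = placement[0]
    for i in range(NUM_LAYERS):
        if placement[i] != current_device:
            transfers += 1              # move activations to the other card
            current_device = placement[i]
        activations = activations + 1   # stand-in for the layer's math
    return activations, transfers

out, transfers = run_request(0)
print(transfers)  # → 1: a single hop between the two halves
```

Since each layer runs only after the previous one finishes, only one card is busy at a time for a single request, which matches the "same speed as one big card, not faster" claim above.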