unshavedyak 2 hours ago

I am eagerly awaiting being able to run a strong local model. I'd hand Apple $5k right now for a Claude in a box. I know the cost might not be there yet; just saying that's around my ideal price point.

$10k might even be worth it - but I'm assuming the more expensive it is, the beefier it is too, which also means more electricity... and I already run ~6 computers/servers in my house. If a power surge happens I'm going to go live in the woods lol.

atonse 2 hours ago | parent | next [-]

I would do the same, but my issue is that the models are changing so fast; I don't want to be locked out of the next model because it only runs on an even newer GPU or something like that.

But maybe my limited understanding is thinking of this wrong.

JamesLeonis 36 minutes ago | parent [-]

I wouldn't worry about hardware.

I've run the latest local models over the last year, including the recent Qwen3 30B A3B, on a 9-year-old GTX 1080 and 32 GB of RAM I have lying around[0]. If I can do that, I don't think hardware will be a problem for you in the near term. The only updates I've needed were to llama.cpp whenever a new class of model was released.

[0]: In my case I want to see how local models perform on limited hardware, sacrificing context size and intelligence compared to SOTA models, so I have to really temper my expectations.
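
For anyone who wants to try something similar, here's a rough sketch using the llama-cpp-python bindings. The GGUF filename, layer count, and context size are guesses you'd tune to your own quantization, VRAM, and RAM:

    # pip install llama-cpp-python (built with CUDA for GPU offload)
    from llama_cpp import Llama

    llm = Llama(
        model_path="Qwen3-30B-A3B-Q4_K_M.gguf",  # hypothetical quantized GGUF filename
        n_gpu_layers=20,  # offload what fits in the 1080's 8 GB VRAM; the rest stays in system RAM
        n_ctx=4096,       # small context window to stay inside 32 GB
    )

    out = llm("Explain mixture-of-experts models in one paragraph.", max_tokens=256)
    print(out["choices"][0]["text"])

The point is just that an old card still helps: partial GPU offload plus a 4-bit quant is what makes a 30B-class model usable on this kind of hardware at all.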

DANmode 2 hours ago | parent | prev | next [-]

You can run 6-to-12-month-old state-of-the-art models for that kind of money, like, yesterday.

templar_snow 44 minutes ago | parent | prev [-]

Uh... get a UPS?