CamperBob2 10 hours ago

> It costs $20-30k a month to run Kimi 2.6. The tokens are sold for $3 per million.

Not if you're OK with 4-bit quantization. More like $30K-$50K one time.

Spring for 8 RTX6000s instead of 4, and you can use the full-precision K2.6 weights ( https://github.com/local-inference-lab/rtx6kpro/blob/master/... ).

reissbaker 10 hours ago | parent | next [-]

RTX 6000 Pro retails for $10k, so an 8x setup is $80k before anything else in the machine, and long-context will have... pretty bad performance (20+ seconds of waiting before any tokens come out). But it's true, it technically works.

I don't think cloud models are going away; the hardware for good perf is expensive and higher param count models will remain smarter for a looong time. Even if the hardware cost for kind-of-usable perf fell to only $10k, cloud ones will be way faster and you'd need a lot of tokens to break even.
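The break-even claim above can be sketched with the thread's own figures ($80k of hardware, $3 per million cloud tokens); the 50 tokens/sec local decode rate is an assumed illustration, not from the thread, and power, depreciation, and the speed gap are ignored:

```python
# Rough break-even: how many cloud tokens would $80k of local hardware replace?
# Figures from the thread: $80k for 8x RTX 6000 Pro, $3 per 1M cloud tokens.
# Ignores electricity, depreciation, and performance differences.

hardware_cost = 80_000          # dollars
cloud_price_per_million = 3.0   # dollars per 1M tokens

break_even_tokens = hardware_cost / cloud_price_per_million * 1_000_000
print(f"{break_even_tokens:,.0f} tokens")  # ~26.7 billion tokens

# At an assumed sustained 50 tokens/sec of local decode, generating
# that many tokens takes roughly:
years = break_even_tokens / 50 / (3600 * 24 * 365)
print(f"{years:.1f} years of 24/7 generation")
```

That is, at these (illustrative) rates the hardware only pays for itself after tens of billions of tokens, which is the "you'd need a lot of tokens to break even" point.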

zozbot234 10 hours ago | parent | next [-]

> I don't think cloud models are going away; the hardware for good perf is expensive

I think local AI will win in its niche by repurposing users' existing hardware, especially as cloud hardware itself gets increasingly bottlenecked in all sorts of ways and the price of cloud tokens rises. You don't have to care about "bad" performance when you've got dedicated hardware that runs your workloads 24/7. Time-critical work that also requires the latest and greatest model can stay on the cloud, but a vast amount of AI work just isn't that critical.

reissbaker 6 hours ago | parent | next [-]

Users do not have $80k of existing hardware, are not going to buy $80k of hardware for performance worse than a $100/month subscription, and models are continuing to grow in size while memory grows in price.

zozbot234 43 minutes ago | parent | next [-]

You said you need $80k in hardware for "good performance". I'm saying the local AI inference workflow will be a lot more flexible about performance than that, and can get away with something vastly cheaper and in line with what the user owns already.

otabdeveloper4 3 hours ago | parent | prev [-]

> paying $100/month

There will not ever be a monthly subscription for LLM tokens. The economics isn't there.

Local tokens will always be cheaper.

ai_fry_ur_brain 9 hours ago | parent | prev [-]

"I think"

Well, your thinking is completely vibes-based and not grounded in any reality I exist in.

CamperBob2 7 hours ago | parent [-]

Other sites beckon.

otabdeveloper4 3 hours ago | parent | prev | next [-]

> higher param count models will remain smarter for a looong time

They're not smarter, they just know more stuff.

You probably don't need knowledge about Pokemon or the Diamond Sutra in your enterprise coding LLM.

The "smarts" comes from post-training, especially around tool use.

anon7725 3 hours ago | parent [-]

If the smarts came from post-training, we could show significant gains by re-running that post-training on previous generations of models. But we know that isn't happening: effective post-training is necessary but not sufficient for model performance.

alfiedotwtf 2 hours ago | parent | prev [-]

If 8 x RTX 6000 is getting you 20s before initial token, how are cloud vendors doing this?

zozbot234 10 hours ago | parent | prev [-]

4-bit quantization is native for Kimi 2.x series.

CamperBob2 10 hours ago | parent [-]

You're right, I was thinking of Qwen. K2.6 will run at UD-Q2_K_XL precision on 4x RTX6000 boards, but I have no idea if it's worthwhile.
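The VRAM arithmetic behind the 4-board vs. 8-board split above can be sketched as follows. The ~1T total parameter count is an assumption (the size class of earlier Kimi K2 releases), and the effective bits-per-weight figures (~4.5 for native INT4 with scales, ~2.7 for UD-Q2_K_XL) are rough estimates; real footprints also need room for KV cache and runtime overhead.

```python
# Back-of-the-envelope VRAM check for a ~1T-parameter MoE (assumed size).
# Bits-per-weight values are rough estimates, not official figures,
# and KV cache / runtime overhead are not counted.

params = 1.0e12   # total parameters (assumption)
gb = 1024**3

def weight_gb(bits_per_param):
    """Weight-only memory footprint in GiB at a given bits-per-weight."""
    return params * bits_per_param / 8 / gb

board = 96  # GB per RTX 6000 Pro
for name, bits in [("FP16", 16), ("native INT4", 4.5), ("UD-Q2_K_XL", 2.7)]:
    need = weight_gb(bits)
    print(f"{name:>12}: {need:5.0f} GB | fits 4x96 GB: {need < 4 * board} "
          f"| fits 8x96 GB: {need < 8 * board}")
```

Under these assumptions the native 4-bit weights (~520 GB) overflow four boards (384 GB) but fit eight (768 GB), while a ~2.7-bpw quant (~315 GB) squeezes onto four, which matches the 4x/8x split discussed above.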