crazygringo 3 hours ago

> within a few years we will be running local models as good as today’s frontier models with almost no cost burden

Based on what? The RAM requirements alone are extraordinary.

No, running large models on shared, dedicated hosted hardware at full utilization is going to be vastly more cost-efficient for the foreseeable future.

crystal_revenge an hour ago | parent | next [-]

> Based on what?

I take it you haven’t actually run any of the current gen local models?

They all fit on fairly accessible hardware, and their performance is at least on par with what I was paying for last year.

I have one of my agents running entirely off a local model on an MBP, and it has repeatedly shown it's capable of non-trivial tasks.

Playing around with another uncensored local model on my 4090 desktop has me finally thinking about canceling my personal Anthropic subscription. Fully private, uncensored chat is a game changer.

For work it's still all private models, but largely because, at this stage, it's worth paying a premium just to be sure you're using the best, and it saves the time of managing our own physical servers. But if we got news tomorrow that Anthropic and OpenAI were shutting down, a reasonable setup could be figured out pretty quickly.

Leynos 41 minutes ago | parent [-]

What kind of useful context window are you getting on a 4090, out of curiosity?

crystal_revenge 23 minutes ago | parent [-]

256k tokens for both the MBP and the 4090

alsetmusic 3 hours ago | parent | prev | next [-]

Local models are 6 to 18 months behind frontier. Even if a cloud model performs better, it's clear that local is catching up.

alecco 2 hours ago | parent | next [-]

> Local models are 6 to 18 months behind frontier.

I wish this were true, but it is not. And I work on open source models, so if anything I would have a bias towards agreeing with you.

Frontier closed models (GPT/Claude) are pulling further ahead of everybody else. Even Google, once the king.

Your claim is a meme that comes from benchmark results, and sadly a lot of models are benchmaxxed. See Llama 4, and most notably the Grok 3 drama with a lot of layoffs. And Chinese big tech... well, they have some cultural issues.

"Qwen's base models live in a very exam-heavy basin - distinct from other base models like llama/gemma. Shown below are the embeddings from randomly sampled rollouts from ambiguous initial words like "The" and "A":"

https://xcancel.com/N8Programs/status/2044408755790508113

---

But thank god we at least have DeepSeek. They keep releasing good models in spite of being seriously resource-constrained, punching well above their weight. But they are not just 6 months behind, either.

crystal_revenge 39 minutes ago | parent | next [-]

I've worked professionally in the open model space for 3 years, and up until 2 months ago I would have agreed with you. But it's empirically not the case today. These models (combined with a good harness) have dramatically improved in both power and performance.

Gemma 4 was a major improvement in self-hostable local models, and Qwen-3.6-A34B is a beast that runs great on an MBP (and insanely well on a 4090).

The biggest lift is combining these models with a good agent harness (I personally prefer the Hermes agent). But I've found in practice they're really not benchmaxxed. I've had these agents successfully handle a few non-trivial research projects that I wouldn't have been able to accomplish as well even last year.

When you add in the open-but-not-local models (Kimi, GLM, Minimax), you have a lot of very nice options. For personal use, anything I don't use local models for I give to my Kimi 2.6-powered agent.

dools 2 hours ago | parent | prev | next [-]

Kimi K2.6 is about on par with GPT 5.2, so I'd say open-weight models are about 6 months behind.

cbg0 2 hours ago | parent | next [-]

The Q4 quantization requires about 600GB of RAM without context, not exactly consumer hardware friendly.
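
A rough back-of-envelope, if you assume a Kimi K2-class model is on the order of 1T total parameters (an assumption, not an official spec):

    # Rough memory estimate for a ~1T-parameter model quantized to Q4.
    total_params = 1.0e12        # assumed total parameter count
    bits_per_weight = 4.5        # 4-bit weights plus per-block scales/zeros
    weight_bytes = total_params * bits_per_weight / 8
    print(f"~{weight_bytes / 1e9:.0f} GB just for the weights")  # ~560 GB
    # Runtime buffers, activations, and any KV cache push the total
    # toward the ~600 GB figure, before you allocate a long context.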

janderland 2 hours ago | parent | prev [-]

Has Kimi found a way to vastly reduce the amount of VRAM required without running at 3 tokens per second? That’s the real concern.

tyre 2 hours ago | parent | prev [-]

The Chinese models should stay close, on a lag. They're doing a ton of distillation that, realistically, I'm not sure the American frontier labs can stop.

alecco an hour ago | parent [-]

US labs got tough on "adversarial" distillation [1]. I suspect that's one of several reasons why Chinese big labs are lagging again.

[1] US AI firms team up in bid to counter Chinese 'distillation' (Apr 7) https://finance.yahoo.com/sectors/technology/articles/us-ai-...

__s 3 hours ago | parent | prev | next [-]

You still need the hardware

I've got a 128GB Strix Halo staying warm at home; it has nothing on top models with big budgets. It's a good supplement to low-end plans for offloading grunt work / initial triage.

manmal 3 hours ago | parent [-]

Have you looked into DwarfStar 4?

__s 2 hours ago | parent [-]

Been away from home for nearly a month, so was mostly going off Qwen 3.5 122b-a10b (Q4?) / Qwen 3.6 35b-a3b (Q8) / Gemma4 31b (Q8)

Thanks for the suggestion tho, a tool by antirez is always going to pique interest. I'll check it out when I'm finally home again.

Tho it says Metal / CUDA, so it doesn't seem friendly to a Linux AMD system.

manmal 41 minutes ago | parent [-]

His quant that fits into 128GB looks interesting for the DGX Spark as well IMO.

greesil 3 hours ago | parent | prev | next [-]

How do you know this? I'm not trying to attack your statement, I am genuinely curious how anyone knows anything about model performance outside of benchmarks that are already in the training set.

scragz 3 hours ago | parent [-]

Using them, you kind of get a feel for skill level and can extrapolate that better than juiced benchmarks.

lukeschlather 3 hours ago | parent | prev | next [-]

It is not getting easier to obtain hardware that can run models which are sufficiently useful to undercut frontier models; if anything, the cost of such hardware has gone up by 25% or more just in the past 6 months.

aleqs 2 hours ago | parent [-]

I think hardware prices will come back down once we start seeing more efficiency improvements in models and hardware, and once more people and companies self-host models (which seems to be happening more and more these days). I think the massive infra/hardware expenditures of OpenAI and the like are going to end up unnecessary, leading to hardware price drops.

t-sauer an hour ago | parent [-]

If companies decide to self-host, wouldn't that drive the demand and therefore prices up? Most companies currently do not have the needed infrastructure.

aleqs 29 minutes ago | parent [-]

I think companies will self host (including on rented hardware) even if it's more expensive, and that, along with efficiency improvements, will drop demand for big AI. I think big AI is overspending on hardware/datacenters at the moment.

calvinmorrison 3 hours ago | parent | prev [-]

If that's true, and in 6 or 12 months I can get what I have today, it might not be worth paying Anthropic.

nine_k 2 hours ago | parent | prev | next [-]

> shared, dedicated hosted hardware at full utilization

I must say that the largest dedicated hosted hardware providers now, like Amazon or Google, to a large extent do not produce the software they offer as a hosted solution (Linux, Postgres, Redis, Python, Node, etc). Similarly, I'm not sure the producers of the frontier models are going to keep their lead as the service providers for the most widely used models. They would need to have quite a bit of an edge over open-weights models.

Also, models are given very sensitive data to process. For large organizations, the shared dedicated hardware may look like a few (dozens of) racks in a datacenter, rented by a particular company and not shared with any other tenants.

dandellion 21 minutes ago | parent | prev | next [-]

> The RAM requirements alone are extraordinary.

At the same time, $100 a month is A LOT of RAM.
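
For what it's worth, a quick sketch of that comparison (the per-GB price is an assumed ballpark; RAM prices have been volatile lately):

    # What does a year of a $100/month subscription buy in DDR5?
    monthly_sub = 100
    yearly_spend = monthly_sub * 12           # $1,200 per year
    price_per_gb = 4.0                        # assumed ~$4/GB; varies a lot
    print(f"~{yearly_spend / price_per_gb:.0f} GB of RAM per year")  # ~300 GB
    # RAM alone doesn't run a model, of course; you still need the rest of the box.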

harrall 2 hours ago | parent | prev | next [-]

You can now buy 128 GB unified memory computers from AMD as a commodity.

They're still pricey, the world is still scaling up memory production, and a lot of code isn't yet built for AMD, but we went from the Wright brothers' first airplane to jet engines in 27 years.

I'm not sure "it's only a few years away", but we are sure moving there fast.

nine_k 2 hours ago | parent | next [-]

> first airplane to jet engines in 27 years.

Nitpick: more like 36 years, from Wright Flyer in 1903 to Heinkel 178 in 1939. Still quite impressive.

Traubenfuchs 2 hours ago | parent | prev [-]

I believe the same thing but keep repeating the question: Then what are all the datacenters for?

moregrist 2 hours ago | parent | next [-]

Non-cynically: the frontier providers have a projection for demand.

Cynically: it’s become an executive-level gpu measuring contest. If you’re not making huge commitments on data centers, you can’t be a serious player.

Realistically: It's a mix of the two. The recent Claude caps for agentic usage suggest that demand exceeded their immediate compute supply. That they can alleviate it with additional capacity from the existing and small-ish xAI facility suggests either that demand isn't rising quite as fast as anticipated, that they're okay in the short term until more capacity comes online, or a mix of both.

Open questions:

1. At what price point does demand fall, and are the frontier providers overall profitable before that price point?

2. At what price/performance point do on-prem local models make more sense than cloud models?

harrall 2 hours ago | parent | prev | next [-]

I print documents and photos at home regularly but I still contract out to dedicated print shops.

The print shop can’t replicate the practicality of local printing and I can’t replicate their scale of investment. Both coexist perfectly.

nnoremap 2 hours ago | parent [-]

Print-outs are a physical good. Tokens aren't.

bluGill an hour ago | parent [-]

They are both fungible. You can replace one with the other.

chris_money202 2 hours ago | parent | prev [-]

Agents

simooooo 31 minutes ago | parent | prev | next [-]

Qwen 3.6 is virtually indistinguishable from Claude on my 5090

iwontberude 2 hours ago | parent | prev | next [-]

I strongly disagree. Humans are insanely well incentivized here, with trillions in market share at stake, to make localized AI good enough, and that's the only benchmark they need.

SkiFire13 an hour ago | parent [-]

Are they? I don't believe there's that big of a market for local AI. Most people don't care that much, and you'll most likely lose the advertising revenue.

GenerWork 26 minutes ago | parent [-]

>I don't believe there's that big of a market for local AI. Most people don't care that much,

I agree that the market for local AI is basically limited to nerds at this point, but that's because nobody's really explained why local AI is a good thing, and also because the vast majority of people need the $20 paid plan at most. How much time and money would it take to get something half as good as OpenAI's products running locally?

mycall 7 minutes ago | parent [-]

It will take another [human] generation before AI is well integrated into everyone's daily lives, to the point where people expect a local model handling things for them. I don't think the killer app has arrived yet (OC is a hint of what is to come).

leptons 2 hours ago | parent | prev [-]

>running large models on shared, dedicated hosted hardware at full utilization is going to be vastly more cost-efficient for the foreseeable future.

That is only true right now because hundreds of billions of dollars are being burned by these AI companies to try to win market share. If you paid what it actually costs, your comment would likely be very different.

jazzyjackson 2 hours ago | parent | next [-]

No, it's economies of scale, and I don't understand where anyone who thinks they'll be better off buying their own hardware is coming from. Why would you get a better deal on MATMULs/watt than the cloud providers?

salawat 2 hours ago | parent | next [-]

Another victim of Goldratt's Theory of Constraints. Some things are more important to optimize for than MATMULs per Watt. What that is I leave as an exercise to the student. May you realize what it is before it is too late.

jazzyjackson an hour ago | parent [-]

Some individuals will choose $10,000 hardware so they can keep freedom and privacy, and that's well and good. My point is just that freedom and privacy are not what wins market share, and hence, IMHO, local LLMs are not going to catch up to and surpass frontier models like some in this thread like to claim.

esseph an hour ago | parent [-]

> freedom and privacy is not what wins marketshare

Digital sovereignty laws may mandate/remove access to LLMs of other countries on economic and national security grounds.

esseph an hour ago | parent | prev [-]

Within 5-10 years you're going to see a box like one of those AMD Halo nodes running homes.

They'll be controlling lights and temperature, they'll be adding calendar reminders that show up on your phone and your fridge. Your phone and devices might sync pictures and videos there instead of the large cloud providers. They'll also be a media server, able to stream and multiplex whatever content you want through the home. They'll also be a VPN endpoint, likely your home router, maybe also a wifi access point.

I think this makes quite a bit of sense. I don't think they'll be ubiquitous, but they could be.

This distributes the power demand to where local solar generation can supplement it, gives the home user a lot of control, and takes ownership of the user data back from big tech.

Maybe I'm imagining things but this is what I think is coming.

It's the LLM/data heart of the home. A useful digital tool.

5 minutes ago | parent [-]
[deleted]
scheme271 2 hours ago | parent | prev [-]

We don't know the parameters, but it probably takes at least an H100, and possibly several, to run a SOTA model. Given the pricing ($25k+ per H100, plus the hardware to run it) and power (700W per H100, plus the hardware to run it), I don't see how anyone except a largish company can afford to run this.
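
Rough numbers, with the GPU price, node size, and electricity rate all assumed:

    # Ballpark capex and power for self-hosting on H100-class hardware.
    gpus = 8                      # one full node; frontier-scale may need several nodes
    gpu_price = 25_000            # assumed ~$25k per H100
    gpu_power_w = 700             # per-GPU TDP; the host machine adds more on top
    kwh_price = 0.15              # assumed $/kWh

    capex = gpus * gpu_price                      # $200,000 in GPUs alone
    power_kw = gpus * gpu_power_w / 1000          # 5.6 kW continuous
    yearly_energy = power_kw * 24 * 365 * kwh_price
    print(f"${capex:,} in GPUs, {power_kw} kW, ~${yearly_energy:,.0f}/yr in electricity")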

sshumaker an hour ago | parent [-]

Are you serious? It’s multiple nodes to run a frontier model (a node is 8x GPUs), and they aren’t running on H100s. You are looking at 32+ GPUs.