AdventureMouse 6 days ago

> If the M5 generation gets this GPU upgrade, which I don't see why not, then the era of viable local LLM inferencing is upon us.

I don't think local LLMs will ever be a thing except for very specific use cases.

Servers will always have way more compute power than edge nodes. As server power increases, people will expect more and more from LLMs, and edge-node compute will stay irrelevant since its relative power will stay the same.

seanmcdirmid 6 days ago | parent | next [-]

Local LLMs would be useful for low-latency local language processing/home control, assuming they ever become fast enough that the 500ms to 1s network latency becomes a dominant factor in having a fluid conversation with a voice assistant. Right now the pauses are unbearable for anything but one-way commands (Siri, do something! - 3 seconds later it starts doing the thing... that works, but it wouldn't work if Siri needed to ask follow-up questions). This is even more important if we consider low-latency gaming situations.

Mobile applications are also relevant. An LLM in your car could be used for local intelligence. I'm pretty sure self-driving cars use some amount of local AI already (although obviously not LLMs, and I don't really know how much of their processing is local vs. done on a server somewhere).

If models stop advancing at a fast clip, hardware will eventually become fast and cheap enough that running models locally isn't something we think of as a nonsensical luxury, in the same way that we don't think rendering graphics locally is a luxury even though remote rendering is possible.

dgacmu 5 days ago | parent [-]

Network latency in most situations is not 500ms. The latency from New York to California is under 70ms, and if you add in some transmission time you're still under 200ms. And that's ignoring that an NYC request will probably go only to VA (sub-15ms).

Even over LTE you're looking at under 120ms coast to coast.
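
Here's the same comparison as a quick back-of-envelope; the network figures are the ones above, while the inference times are assumed placeholders purely to show what actually dominates the pause:

    # Rough latency budget for a voice query. Network RTTs are the figures quoted
    # above; the inference-time numbers are assumptions for illustration only.
    rtt_ms = {
        "NYC -> VA datacenter": 15,
        "NYC -> CA datacenter": 70,
        "LTE, coast to coast": 120,
    }
    server_inference_ms = 300   # assumed time-to-first-token on a hosted model
    local_inference_ms = 600    # assumed time-to-first-token on a small on-device model

    for path, rtt in rtt_ms.items():
        total = rtt + server_inference_ms
        print(f"{path}: ~{total} ms to first token (vs ~{local_inference_ms} ms fully local)")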

seanmcdirmid 5 days ago | parent [-]

You have to take any of those numbers and multiply them by two, since you have to go there and then back again.

dgacmu 2 days ago | parent [-]

No, those were round trip times already

jameshart 6 days ago | parent | prev | next [-]

> Servers will always have way more compute power than edge nodes

This doesn't seem right to me.

Take all the memory and CPU cycles of all the clients connected to a typical online service and compare them to the memory and CPU in the datacenter serving it: the vast majority of compute involved in delivering that experience is on the client. And there's probably a vast amount of untapped compute available on those clients - most websites only peg the client CPU by accident, because they triggered an infinite loop in an ad bidding war; imagine what they could do if they actually used that compute power on purpose.

But even doing fairly trivial stuff, a typical browser tab is using hundreds of megs of memory and an appreciable percentage of the CPU of the machine it's loaded on, for the duration of the time it's being interacted with. Meanwhile, serving that content out to the browser took milliseconds, and was done at the same time as the server was handling thousands of other requests.

Edge compute scales with the amount of users who are using your service: each of them brings along their own hardware. Server compute has to scale at your expense.

Now, LLMs bring their special needs - large models that need to be loaded into vast fast memory... there are reasons to bring the compute to the model. But it's definitely not trivially the case that there's more compute in servers than clients.

arghwhat 6 days ago | parent [-]

The sum of all edge nodes exceeds the power in the datacenter, but the peak power provided to you by the datacenter significantly exceeds your edge node's capabilities.

A single datacenter machine with state of the art GPUs serving LLM inference can be drawing in the tens of kilowatts, and you borrow a sizable portion for a moment when you run a prompt on the heavier models.

A phone that has to count individual watts, or a laptop that peaks at double-digit sustained draw, isn't remotely comparable, and the gap isn't one or two hardware features.

pdpi 6 days ago | parent | prev | next [-]

As an industry, we've swung from thin clients to fat clients and back countless times. I'm sure LLMs won't be immune to that phenomenon.

meltyness 6 days ago | parent [-]

I adore this machinery. There's a lot of money riding on the idea that interest in AI/ML will result in the value being in owning a bunch of big central metal, like the cloud era has produced, but I'm not so sure.

SturgeonsLaw 6 days ago | parent [-]

I'm sure the people placing multibillion dollar bets have done their research, but the trends I see are AI getting more efficient and hardware getting more powerful, so as time goes on, it'll be more and more viable to run AI locally.

Even with token consumption increasing as AI abilities increase, there will be a point where AI output is good enough for most people.

Granted, people are very willing to hand over their data and often money to rent a software licence from the big players, but if they're all charging subscription fees while a local LLM costs nothing, that might cause a few sleepless nights for a few execs.

meltyness 6 days ago | parent | next [-]

TTS would be an interesting case study. It hasn't really been in the limelight, so it could serve as a leading indicator for what will happen when attention to text generation inevitably wanes.

I use Read Aloud across a few browser platforms because sometimes I don't care to read an article I have some passing interest in.

The landscape is a mess:

It's not really bandwidth-efficient to transmit, on one count. Local frameworks like Piper perform well in a lot of cases. There are paid APIs from the big players; at least one player has incorporated API-powered neural TTS and packaged it into their browser, presumably ad-supported or something; yet another has incorporated it into their OS already (though it defaults to a Speak & Spell voice for god knows why). I'm not willing to pay $0.20 per page though, after experimenting, especially when the free/private solution is good enough.

impure-aqua 6 days ago | parent | prev [-]

We could potentially see one-time-purchase model checkpoints, where users pay to get a particular version for offline use and future development is gated behind paying again - but certainly the issue of “some level of AI is good enough for most users” might hurt the infinite-growth dreams of VCs.

Closi 5 days ago | parent | prev | next [-]

IMO the benefit of a local LLM on a smartphone isn't necessarily compute power/speed - it's reliability without a reliance on connectivity, it can offer privacy guarantees, and assuming the silicon cost is marginal, could mean you can offer permanent LLM capabilities without needing to offer some sort of cloud subscription.

hapticmonkey 6 days ago | parent | prev | next [-]

If the future is AI, then a future where all compute has to pass through one of a handful of multinational corporations with GPU farms... is something to be wary of. Local LLMs are a great idea for smaller tasks.

tonyhart7 6 days ago | parent [-]

But it's not the future; we can already do that right now.

The problem is people's expectations: they want the model to be smart.

People don't have a problem with whether it's local or not, but they want the model to be useful.

aurareturn 5 days ago | parent [-]

Sure, that's why local LLMs aren't popular or mass market as of September 2025.

But cloud models will hit diminishing returns, local hardware will get drastically faster, and techniques to run inference efficiently will be worked out further. At some point, local LLMs will have their day.

tonyhart7 5 days ago | parent [-]

Only in theory, and that's not going to happen.

This is the same thing that happened to the software and game industries.

Because the free market forces people to raise the bar every year, the requirements of apps and games are never settled; they only go up.

Humans will never be satisfied; the boundary will keep being pushed further.

That's why we have 12GB or 16GB of RAM in smartphones right now just for the system + apps.

And now we must accommodate a local LLM too??? It only goes up; people will demand smarter and smarter models.

A frontier model today will be deemed unusable (dumb) in 5 years.

Example: people were literally screaming in agony when Anthropic quantized their model.

Nevermark 6 days ago | parent | prev | next [-]

Boom! [0]

> Deepseek-r1 was loaded and ran locally on the Mac Studio

> M3 Ultra chip [...] 32-core CPU, an 80-core GPU, and the 32-core Neural Engine. [...] 512GB of unified memory, [...] memory bandwidth of 819GB/s.

> Deepseek-r1 was loaded [...] 671-billion-parameter model requiring [...] a bit less than 450 gigabytes of [unified] RAM to function.

> the Mac Studio was able to churn through queries at approximately 17 to 18 tokens per second

> it was observed as requiring 160 to 180 Watts during use

Considering getting this model. Looking into the future, a Mac Studio M5 Ultra should be something special.

[0] https://appleinsider.com/articles/25/03/18/heavily-upgraded-...
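
Quick sanity check on those figures (a rough sketch: the ~37B active parameters per token for Deepseek-r1 is my own assumption, the rest comes straight from the article). Decode on a big MoE model is roughly memory-bandwidth-bound, so tokens/sec is about bandwidth divided by bytes touched per token:

    # Back-of-envelope decode speed: tokens/sec ~= bandwidth / bytes read per token.
    total_params  = 671e9   # from the article
    active_params = 37e9    # assumed MoE active parameters per token (my assumption)
    model_bytes   = 450e9   # ~450 GB resident in unified memory, from the article
    bandwidth     = 819e9   # 819 GB/s M3 Ultra memory bandwidth, from the article

    bytes_per_param = model_bytes / total_params        # ~0.67 bytes (~5.4-bit quant)
    bytes_per_token = active_params * bytes_per_param   # ~25 GB touched per token
    ceiling_tps     = bandwidth / bytes_per_token       # ~33 tok/s naive ceiling

    print(f"~{ceiling_tps:.0f} tok/s ceiling vs ~17-18 tok/s observed")

Landing at roughly half the naive ceiling seems about right once attention/KV-cache reads and other overhead are accounted for.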

bigyabai 5 days ago | parent [-]

"Maybe Apple will disprove you in the future" isn't a great refutation of the parent's point.

evilduck 4 days ago | parent [-]

"Servers are more powerful" isn't a super strong point. Why aren't all PC gamers rendering games on servers if raw power was all that mattered? Why do workstation PCs even exist?

Society is already giving pushback to AI being pushed on them everywhere; see the rise of the word "clanker". We're seeing mental health issues pop up. We're all tired of AI slop content and engagement bait. Even the developers like us discussing it at the bleeding edge go round in circles with the same talking points reflexively. I don't see it as a given that there's public demand for even more AI, "if only it were more powerful on a server".

bigyabai 4 days ago | parent [-]

You make a good point, but you're still not refuting the original argument. The demand for high-power AI still exists, the products that Apple sells today do not even come close to meaningfully replacing that demand. If you own an iPhone, you're probably still using ChatGPT.

Speaking to your PC gaming analogy, there are render farms for graphics - they're just used for CGI and non-realtime use cases. What there isn't a huge demand for is consumer-grade hardware at datacenter prices. Apple found this out the hard way shipping Xserve prematurely.

evilduck 3 days ago | parent | next [-]

> Speaking to your PC gaming analogy, there are render farms for graphics - they're just used for CGI and non-realtime use cases. What there isn't a huge demand for is consumer-grade hardware at datacenter prices.

Right, and that's despite the datacenter hardware being far more powerful and for most people cheaper to use per hour than the TCO of owning your own gaming rig. People still want to own their computer and want to eliminate network connectivity and latency being a factor even when it's generally a worse value prop. You don't see any potential parallels here with local vs hosted AI?

Local models on consumer-grade hardware far inferior to buildings full of GPUs can already competently do tool calling. They can already generate tok/sec far beyond reading speed. The hardware isn't serving hundreds of requests in parallel. Again, it just doesn't seem far-fetched to think that the public will sway away from paying for more subscription services for something that can basically run on what they already own. Hosted frontier models won't go away (they _are_ better at most things), but can all of these companies sustain themselves as businesses if they can't keep encroaching into new areas to seek rent? For the average ChatGPT user, local Apple Intelligence and Gemma 3n basically already have the skills and smarts required; they just need more VRAM, access to RAG'd world knowledge, and access to the network to keep up.

pdimitar 3 days ago | parent | prev [-]

> The demand for high-power AI still exists, the products that Apple sells today do not even come close to meaningfully replacing that demand.

Correct, though to me it seems that this comes at the price of narrowing the target audience (i.e. devs and very demanding analysis + production work).

For almost everything else people just open a bookmarked ChatGPT / Gemini link and let it flow, no matter how erroneous it might be.

The AI sector has been burning a lot of bridges over the last 1.5 - 2 years; it solidifies the public's impression that these companies just peddle subscription income as hard as they can without providing more value.

Somebody finally had the right idea some months ago: sub-agents. Took them a while, and it was obvious right from the start that just dumping 50 pages on your favorite LLM is never going to produce impressive results. I mean, sometimes it does, but people do a really bad job of quickly detecting when it doesn't, and are slow to correct course and just burn through tokens and their own patience.

Investors are gonna keep investor-ing, they will of course want the paywall and for there to be no open models at all. But happily the market and even general public perception are pushing back.

I am really curious what will come out of all this. One prediction is local LLMs that secretly transmit to the mothership, so the work of the AI startup is partially offloaded to its users. But I am known to be very cynical, so take this with a spoonful of salt.

waterTanuki 6 days ago | parent | prev | next [-]

I regularly use local LLMs at work (full-stack dev) due to restrictions, and occasionally I get results comparable to GPT-5 or Opus 4.

eprparadox 6 days ago | parent [-]

This is really cool. Could you say a bit about your setup (which LLMs, what tasks they're best for, etc.)?

waterTanuki 5 days ago | parent [-]

I switch between gpt-oss:20b and qwen3:30b. Good for greenfielding projects, setting up bash scripts, simple CRUD APIs using Express, and the occasional error in a React or Vue app.
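
(A minimal sketch of what querying a setup like this looks like, assuming an Ollama-style local server on its default port; the model tag and prompt are just examples.)

    # Query a locally served model over an Ollama-style HTTP API.
    import json
    import urllib.request

    payload = {
        "model": "qwen3:30b",
        "prompt": "Write a bash script that renames *.jpeg files to *.jpg",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])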

rowanG077 6 days ago | parent | prev | next [-]

That's assuming diminishing returns won't hit hard. If a 10x smaller local model is 95% (whatever that means) as good as the remote model, it makes sense to use local models most of the time. It remains to be seen if that will happen, but it's certainly not unthinkable imo.

sigmar 6 days ago | parent [-]

It's really task-dependent, text summarization and grammar corrections are fine with local models. I posit any tasks that are 'arms race-y' (image generation, creative text generation) are going to be offloaded to servers, as there's no 'good enough' bar above which they can't improve.

PaulRobinson 5 days ago | parent | prev | next [-]

Apple literally mentioned local LLMs in the event video where they announced this phone and others.

Apple's privacy stance is to do as much as possible on the user's device and as little as possible in the cloud. They have iCloud for storage to make inter-device sync easy, but even that is painful for them. They hate cloud. This is the direction they've had for some years now. It always makes me smile that so many commentators just can't understand it and insist that they're "so far behind" on AI.

All the recent academic literature suggests that LLM capability is beginning to plateau, and we don't have ideas on what to do next (and no, we can't ask the LLMs).

As you get more capable SLMs or LLMs, and the hardware gets better and better (who _really_ wants to be long on nVIDIA or Intel right now? Hmm?), people are going to find that they're "good enough" for a range of tasks, and Apple's customer demographic are going to be happy that's all happening on the device in their hand and not on a server [waves hands] "somewhere", in the cloud.

astrange 5 days ago | parent [-]

It's not difficult to find improvements to LLMs still.

Large issues: tokenizers exist, reasoning models are still next-token-prediction instead of having "internal thoughts", RL post-training destroys model calibration

Small issues: they're all trained to write Python instead of a good language, most of the benchmarks are bad, pretraining doesn't use document metadata (i.e. they have to learn from each document without being told its URL or that documents are written by different people)

fennecfoxy 5 days ago | parent | prev | next [-]

I think they will be, but more for hand-off. Local will be great for starting timers, adding things to calendar, moving files around. Basic, local tasks. But it also needs to be intelligent enough to know when to hand off to server-side model.

The Android crowd has been able to run LLMs on-device since llama.cpp first came out. But the magic is in the integration with the OS. As usual there will be hype around Apple, idk, inventing the very concept of LLMs or something. But the truth is neither Apple nor Android did this; only the wee team that wrote the "Attention Is All You Need" paper + the many open-source/hobbyist contributors inventing creative solutions like LoRA and creating natural ecosystems for them.

That's why I find this memo so cool (and will once again repost the link): https://semianalysis.com/2023/05/04/google-we-have-no-moat-a...

brookst 6 days ago | parent | prev | next [-]

Couldn’t you apply that same thinking to all compute? Servers will always have more, timesharing means lower cost, people will probably only ever own dumb terminals?

aydyn 6 days ago | parent [-]

Latency. You can't play video games on the cloud. Google tried and failed.

wcarss 6 days ago | parent | next [-]

Well, another way to recount it is that Google tried and it worked okay, but they decided it wasn't moving the needle, so they stopped trying.

liamwire 6 days ago | parent | prev | next [-]

Huh? GeForce NOW is a resounding success by many metrics. Anecdotally, I use it weekly to play multiplayer games and it’s an excellent experience. Google giving up on Stadia as a product says almost nothing about cloud gaming’s viability.

Balinares 6 days ago | parent | prev [-]

Do you mean Stadia? Stadia worked great. The only perceptible latency I initially had ended up coming from my TV and was fixed by switching it to so-called "gaming mode".

Never could figure out what the heck the value proposition was supposed to be though. Pay full price for a game that you can't even pretend you own? I don't think so. And the game conservation implications were also dire, so I'm not sad it went away in the end.

But on technical merits? It worked great.

aydyn 5 days ago | parent [-]

No it did not.

alwillis 3 days ago | parent | prev | next [-]

> I don't think local LLMs will ever be a thing except for very specific use cases.

I disagree.

There's a lot of interest in local LLMs in the LLM community. My internet was down for a few days, and boy did I wish I had a local LLM on my laptop!

There's a big push for privacy; people are using LLMs for personal medical issues for example and don't want that going into the cloud.

Is it necessary to talk to a server just to check out a letter I wrote?

Obviously with Apple's release of iOS 26, macOS 26, and the rest of their operating systems, tens of millions of devices are getting a local LLM that third-party apps can take advantage of.

MPSimmons 6 days ago | parent | prev | next [-]

The crux is how big the L is in local LLMs. Depending on what it's used for, you can actually get really good performance from topically trained models when they're leveraged for their specific purpose.

rickdeckard 6 days ago | parent [-]

There's a lot of L's in LLLM, so overall it's hard to tell what you're trying to say...

Is it 'Local'?, 'Large?'...'Language?'

fennecfoxy 5 days ago | parent | next [-]

Clearly the Large part, given the context...LLMs usually miss stuff like this, funnily enough.

touristtam 6 days ago | parent | prev | next [-]

Do you see the C for Cheap in there? Me neither.

rickdeckard 6 days ago | parent [-]

Sorry I'm not following. Cheap in terms of what, hardware cost?

From Apple's point of view a local model would be the cheapest possible to run, as the end-user pays for hardware plus consumption...

triceratops 5 days ago | parent | prev [-]

Username checks out.

unethical_ban 6 days ago | parent | prev | next [-]

It's a thing right now.

I'm running a Qwen 30B coder model on my Framework laptop to ask questions about Ruby vs. Python syntax, because I can, and because the internet was flaky.

At some point, more doesn't mean I need it. LLMs will certainly get "good enough" and they'll be lower latency, no subscription, and no internet required.

nsonha 6 days ago | parent [-]

Pretty amazing. As a student I remember downloading offline copies of Wikipedia and Stack Overflow and feeling that I truly had the entire world on my laptop and phone. Local LLMs are arguably even more useful than those archives.

hotstickyballs 6 days ago | parent | prev | next [-]

If compute power were the deciding factor in the server vs. edge discussion, then we'd never have smartphones.

nsonha 6 days ago | parent | prev [-]

A local LLM may not be good enough for answering questions (which I think won't be true for much longer) or generating images, but today it should be good enough to infer deeplinks, app extension calls, or agentic walkthroughs... and usher in a new era of controlling the phone by voice command.
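
Here's a toy sketch of the kind of thing I mean (the model call is stubbed out, and the JSON schema and URL scheme are invented for illustration): constrain the on-device model to emit structured JSON, then turn that into a deeplink.

    # Toy sketch of "voice command -> deeplink". The model call is a stub; the
    # intent schema and the app URL scheme below are made up for illustration.
    import json
    from urllib.parse import quote

    def query_local_model(prompt: str) -> str:
        # Stand-in for an on-device model prompted to reply with constrained JSON only.
        return '{"app": "calendar", "action": "create_event", "title": "Dentist", "when": "2025-10-02T15:00"}'

    def command_to_deeplink(command: str) -> str:
        prompt = f'Map this request to JSON with keys app, action, plus parameters: "{command}"'
        intent = json.loads(query_local_model(prompt))
        app, action = intent.pop("app"), intent.pop("action")
        query = "&".join(f"{k}={quote(str(v))}" for k, v in intent.items())
        return f"{app}://{action}?{query}"

    print(command_to_deeplink("add a dentist appointment Thursday at 3pm"))
    # -> calendar://create_event?title=Dentist&when=2025-10-02T15%3A00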

gnopgnip 6 days ago | parent [-]

You can generate images on an iPhone now with “Draw Things”
