aurareturn 15 hours ago

It isn't going to replace cloud LLMs, since cloud LLMs will always have higher throughput and be smarter. Cloud and local LLMs will grow together, not replace each other.

I'm not convinced that local LLMs use less electricity either. Cloud LLMs batch many requests across fully utilized hardware, so per token at the same level of intelligence they should run circles around local LLMs in efficiency. If they don't, what are we paying hundreds of billions of dollars for?
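
A back-of-envelope sketch of why batching matters; all numbers below are hypothetical, just to show the shape of the argument:

    // Hypothetical figures: a local GPU drawing ~500 W serving one user at
    // ~50 tok/s, vs. a datacenter GPU drawing ~700 W but batching ~50
    // concurrent requests at ~40 tok/s each.
    const localJoulesPerToken = 500 / 50;          // ~10 J per token
    const cloudJoulesPerToken = 700 / (50 * 40);   // ~0.35 J per token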

I think local LLMs will continue to grow, and there will be a "ChatGPT moment" for them when good-enough models meet good-enough hardware. We're not there yet, though.

Note, this is why I'm big on investing in chip manufacturing companies. Not only are they completely maxed out due to cloud LLMs, but soon they will be doubly maxed out having to replace existing consumer chips with ones suited for AI inference. This is a massive transition and will fuel another chip manufacturing boom.

raincole 14 hours ago | parent | next [-]

Yep. People were claiming DeepSeek was "almost as good as SOTA" when it came out. Local will always be one step away, like fusion.

It's just wishful thinking (and hatred towards American megacorps). Old as the hills. Understandable, but not based on reality.

kortilla 12 hours ago | parent [-]

Don’t try to draw trend lines for an industry that has existed for <5 years.

virtue3 14 hours ago | parent | prev | next [-]

We are 100% there already. In browser.

The WebGPU model in my browser on my M4 Pro MacBook was as good as ChatGPT 3.5 and doing 80+ tokens/s.
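
For reference, a minimal sketch of what an in-browser WebGPU setup can look like; this assumes the @mlc-ai/web-llm package and one of its prebuilt model IDs, not necessarily whatever the parent actually used:

    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    // Downloads the weights and compiles WebGPU kernels on first run.
    const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC");

    const start = performance.now();
    const reply = await engine.chat.completions.create({
      messages: [{ role: "user", content: "Summarize WebGPU in one sentence." }],
    });
    const seconds = (performance.now() - start) / 1000;
    const tokens = reply.usage?.completion_tokens ?? 0;
    console.log(reply.choices[0].message.content);
    console.log(`${(tokens / seconds).toFixed(1)} tok/s`);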

Local is here.

AndroTux 13 hours ago | parent | next [-]

Sir, ChatGPT 3.5 is more than 3 years old; matching it on your bleeding-edge M4 Pro hardware only proves the previous commenter's point.

AugSun 13 hours ago | parent | prev [-]

It works really well for "You're a helpful assistant / Hi / Hello there, how may I help you today?" Anything else (especially in a non-English language) and you will see the limitations yourself. Just try it.

mirekrusin 13 hours ago | parent | prev | next [-]

A local RTX 5090 is actually faster than an A100/H100.

aurareturn 12 hours ago | parent [-]

It's a $4,000 GPU with 32GB of VRAM that needs a 1,000-watt PSU. It's not realistic for the masses.

If it had something like 80GB of VRAM, it'd cost $10k.
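
Rough weight-memory arithmetic (illustrative only, ignoring KV cache and activations) shows why 32GB is tight:

    const params = 70e9;          // e.g. a 70B-parameter dense model
    const bytesPerParam = 0.5;    // ~4-bit quantization
    const weightGB = (params * bytesPerParam) / 1e9;  // ~35 GB, already over a 32 GB card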

The real local LLM chip is Apple Silicon, starting with the M5 generation and its matmul acceleration in the GPU. You can run a good model on an M5 Max 128GB system, with good prompt processing and token generation speeds, good enough for many things. Apple accidentally stumbled into a huge advantage in local LLMs through its unified memory architecture.
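
A rough way to see the unified-memory advantage: decode speed is largely memory-bandwidth bound, so tokens/s is roughly bandwidth divided by the bytes touched per token (about the weight size for a dense model). The numbers below are placeholders, not actual M5 specs:

    const bandwidthGBs = 500;     // hypothetical unified-memory bandwidth, GB/s
    const weightsGB = 35;         // e.g. a 70B model at ~4-bit
    const decodeTokS = bandwidthGBs / weightsGB;  // ~14 tok/s, order of magnitude only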

Still not for the masses, not cheap, and not great, though. It's going to take years to slowly bring local LLMs to mainstream consumer computers.

hrmtst93837 13 hours ago | parent | prev | next [-]

You're assuming throughput sets the value, but offline use and privacy change the tradeoff fast.

aurareturn 13 hours ago | parent [-]

Yeah, I get that there will always be demand for local waifus. I never said local LLMs won't be a thing; I even said they will be a huge thing. They just won't replace cloud.

AugSun 14 hours ago | parent | prev [-]

Looking at the downvotes, I feel good about the SDE future in 3-5 years. We will have a swamp of "vibe-experts" who won't be able to pay 100K a month for CC. Meanwhile, people who still remember how to code in Vim will (slowly) get back to pre-COVID TC levels.

QuantumNomad_ 14 hours ago | parent | prev [-]

What are CC and TC? I have not heard these abbreviations (except for CC meaning credit card or carbon copy, neither of which is what I think you mean here).

Ericson2314 14 hours ago | parent | next [-]

I figured it out from context clues:

CC: Claude Code

TC: total comp(ensation)

AugSun 13 hours ago | parent [-]

Thank you for clarifying! (I had no idea it needed to be explained, sorry.)
