Tade0 5 hours ago

The only way to have hardware reach this sort of efficiency is to embed the model in the hardware itself.

This exists[0], but the chip in question is physically large and won't fit on a phone.
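
For intuition on why: on conventional hardware, token generation is memory-bound, because every weight has to be streamed in for every token. A rough sketch of the arithmetic (all numbers are illustrative assumptions, not Taalas specs):

    # Why inference is bandwidth-bound on ordinary hardware.
    # All numbers below are illustrative assumptions.
    params = 8e9           # assume an 8B-parameter dense model
    bytes_per_weight = 2   # FP16 weights
    tokens_per_s = 50      # assumed target generation speed

    # Dense decoding touches every weight once per token, so the
    # required weight bandwidth is:
    bw = params * bytes_per_weight * tokens_per_s
    print(f"{bw / 1e9:.0f} GB/s")  # ~800 GB/s, HBM territory

    # Weights etched into the datapath never move at all, which
    # is where the efficiency comes from.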

[0] https://www.anuragk.com/blog/posts/Taalas.html

tclancy 5 hours ago

I think you're ignoring the inevitable march of progress. Phones will get big enough to hold it soon.

tren_hard 3 hours ago

Instead of slapping on an extra battery pack, it would be an onboard LLM chip. These could have life cycles just like phones do.

Getting bigger (foldable) phones without losing battery life, while also running usable models in the same form factor, is a pretty big ask.

RALaBarge 4 hours ago

I think the future is the model becoming lighter, not the hardware becoming heavier.

Tade0 3 hours ago

The hardware will become heavier regardless, I'm afraid.

ottah 5 hours ago

That's actually pretty cool, but I'd hate to freeze a model's weights into silicon without an incredibly specific and broad use case.

patapong 3 hours ago

Depends on cost IMO - if I could buy a Kimi K2.5 chip for a couple of hundred dollars today I would probably do it.

whatever1 3 hours ago

I mean, if it were small enough to fit in an iPhone, why not? Every year you'd fabricate a new chip with the best model. They already do this with the camera pipeline chips.

superxpro12 3 hours ago

Sounds like just the sort of thing FPGAs were made for.

The $$$ would probably make my eyes bleed, though.

chrsw 3 hours ago

Current FPGAs would have terrible performance here. Maybe we need a new architecture that combines ASIC-level LLM performance with support for sparse reconfiguration.

0x457 2 hours ago

Wouldn't it be the opposite of freezing weights?

intrasight 5 hours ago

I think that, for many reasons, this will become the dominant paradigm for end-user devices.

Moore's law will shrink it to 8mm soon. I think it'll be like a microSD card you plug in.

Or we develop a new silicon process that can mimic synaptic weights in biology. Synapses have plasticity.

bigyabai 5 hours ago

One big bottleneck is SRAM cost. Even an 8B model would probably end up costing hundreds of dollars to run locally on that kind of hardware. That's especially unpalatable if model quality keeps advancing year over year.
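
For rough numbers (every figure below is a ballpark assumption for a ~5nm-class node, not vendor data):

    # Back-of-envelope: on-die SRAM to hold an 8B model.
    # Every figure here is an assumed ballpark, not vendor data.
    params = 8e9              # 8B parameters
    bits_per_weight = 8       # INT8 quantization
    sram_mbit_per_mm2 = 30    # assumed usable SRAM macro density
    wafer_cost_usd = 17_000   # assumed 300mm wafer cost
    wafer_area_mm2 = 70_685   # full 300mm wafer area

    sram_mbit = params * bits_per_weight / 1e6    # 64,000 Mbit
    die_area_mm2 = sram_mbit / sram_mbit_per_mm2  # ~2,133 mm^2
    cost = die_area_mm2 / wafer_area_mm2 * wafer_cost_usd
    print(f"~{die_area_mm2:.0f} mm^2, ~${cost:.0f} raw silicon")

That's ~$500 of silicon before yield loss, and ~2,000 mm^2 is well past the reticle limit (~850 mm^2), so it would have to be split across multiple dies anyway.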

> Or we develop a new silicon process that can mimic synaptic weights in biology. Synapses have plasticity.

It's amazing to me that people consider this more realistic than FAANG collaborating on a CUDA killer. I guess Nvidia really does deserve its valuation.

intrasight 4 hours ago

> bottleneck is SRAM cost

Not for this approach.
