Gigachad 5 hours ago

We still aren't going to be putting 200 GB of RAM in a phone in a couple of years to run those local models.

mh- 5 hours ago | parent | next [-]

A lot of people are making the mistake of noticing that local models have been 12-24 months behind SotA ones for a good portion of the last couple years, and then drawing a dotted line assuming that continues to hold.

It simply... doesn't. The SotA models are enormous now, and there's no free lunch on compression/quantization here.

Opus 4.6 capabilities are not coming to your laptop (even with 64-128 GB) or phone with the popular architecture that current LLMs use.

Now, that doesn't mean a much narrower-scoped model with very impressive results can't be delivered. But that narrower model won't have the same breadth of knowledge, and it's TBD whether the quality/outcomes seen with these models are possible without that broad "world" knowledge.

It also doesn't preclude a new architecture or other breakthrough. I'm simply stating it doesn't happen with the current way of building these.

edit: forgot to mention the notion of ASIC-style models on a chip. I haven't been following this closely, but last I saw the power requirements are too steep for a mobile device.

am17an 4 hours ago | parent | next [-]

Don’t underestimate the march of technology. Just look at your phone, it has more FLOPS than there were in the entire world 40 years ago.

kuboble 4 hours ago | parent | next [-]

And I think it's very likely that, with improved methods, you could get Opus 4.6-level performance on a wristwatch in a few years.

You needed a supercomputer to win at chess, until you didn't.

Local models' natural-language performance today is far better than any algorithm running on a supercomputer cluster could manage just a few years ago.

root_axis 2 hours ago | parent | prev | next [-]

Yeah, but that's the current state of the art after decades of aggressive optimization; there's no foreseeable future where we'll be able to cram several orders of magnitude more RAM into a phone.

TeMPOraL 38 minutes ago | parent [-]

We already cram several orders of magnitude more flash storage into a phone than RAM (e.g. my phone has 16 GB of RAM but 1 TB of storage). Even now, with some smart coding, if you don't need all that data at the same time for random access at sub-millisecond speed, it's hard to tell the difference.
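To make that concrete, here's a minimal sketch (my own illustration, not any commenter's code) of the standard trick: memory-map a large file so the OS pages in only the regions you actually touch, letting you address data far larger than RAM almost transparently.

```python
import mmap
import os
import tempfile

# Create a sparse file much larger than we'd want resident in RAM at once.
# (Kept small here for demonstration; the principle scales to many GB.)
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
size = 64 * 1024 * 1024  # 64 MiB
with open(path, "wb") as f:
    f.truncate(size)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    # Only the pages we actually touch get faulted into RAM;
    # the rest of the file stays on flash/SSD.
    offset = 10 * 1024 * 1024
    mm[offset] = 42          # write one byte deep inside the file
    value = mm[offset]       # read it back
    mm.close()

print(value)  # 42
```

The same mechanism is what lets inference engines map multi-gigabyte weight files without copying them wholesale into memory.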

vrighter 31 minutes ago | parent | prev [-]

But it doesn't have that many more FLOPS than it did a couple of years ago.

baq 2 hours ago | parent | prev | next [-]

Pretty sure there are at least a couple of orders of magnitude to be gained in purely algorithmic areas of LLM inference; maybe training, too, though I'm less confident there. Rationale: meat computers run on 20 W, though pretraining took a billion years or so.

colechristensen 4 hours ago | parent | prev [-]

There's been plenty of free lunch in shrinking models so far with regard to capability vs. parameter count.

Contradicting that trend takes more than "It simply.. doesn't."

There's plenty of room for RAM sizes to double, along with bus speeds. They stagnated for a long time simply because there was little need for more.

jurmous 4 hours ago | parent | prev [-]

We don’t need 200 GB of RAM on a phone to run big models. Just 200 GB of storage, thanks to Apple’s “LLM in a flash” research.

See: https://x.com/danveloper/status/2034353876753592372
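The core idea there, roughly sketched below (a hypothetical toy, not the paper's actual code), is to keep the big FFN weight matrices on flash and read in only the rows for neurons predicted to be active, so RAM holds a small working set rather than the whole model.

```python
import os
import tempfile
import numpy as np

# Hypothetical tiny "FFN weight matrix" stored on disk. In the paper's
# setting this would be many GB of flash, far exceeding device RAM.
path = os.path.join(tempfile.mkdtemp(), "ffn.npy")
rows, cols = 1024, 64
rng = np.random.default_rng(0)
np.save(path, rng.standard_normal((rows, cols)).astype(np.float32))

# Memory-map the weights: nothing is read from storage until indexed.
w = np.load(path, mmap_mode="r")

def sparse_ffn(x, active_rows):
    """Compute only the neurons predicted to be active,
    reading just those weight rows from storage."""
    w_active = np.asarray(w[active_rows])  # loads only these rows
    return w_active @ x

x = rng.standard_normal(cols).astype(np.float32)
active = [3, 17, 512]          # e.g. output of an activation predictor
y = sparse_ffn(x, active)
print(y.shape)  # (3,)
```

The activation predictor and the flash-friendly layout (row bundling, windowing) are the hard parts in practice; this only shows the load-on-demand skeleton.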

adrian_b 2 hours ago | parent [-]

Yes, I agree this is the right solution, because for a locally hosted model I value the quality of the output more than the speed with which it is produced, so I prefer the models as they were originally trained, without further quantization.

While that paper touts Apple's advantage in SSD speed, which allows decent inference performance with huge models, SSD speeds equal to or greater than that can nowadays be achieved in any desktop PC with dual PCIe 5.0 SSDs, or even one PCIe 5.0 and one PCIe 4.0 SSD.

Because I had independently reached this conclusion, like I presume many others, a week ago I started modifying llama.cpp to make optimal use of weights stored on SSDs, while also batching many tasks so that they share each pass over the SSDs. I assume that in the coming months we will see more projects in this direction, so locally hosting very large models will become easier and more widespread, letting people avoid the high risks associated with external providers, like the recent enshittification of Claude Code.
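The batching idea can be sketched like this (my own toy illustration, assuming a hypothetical `load_layer()` standing in for an SSD read; this is not the actual llama.cpp modification): when weight streaming is the bottleneck, you want each layer read from the SSD once per pass, amortized across every request in the batch.

```python
import numpy as np

# Toy "model": a stack of layers whose weights we pretend live on SSD.
rng = np.random.default_rng(0)
n_layers, dim = 4, 8
layers_on_disk = [rng.standard_normal((dim, dim)).astype(np.float32)
                  for _ in range(n_layers)]

def load_layer(i):
    # Placeholder for the expensive SSD read you want to amortize.
    return layers_on_disk[i]

def batched_forward(batch):
    """One pass over the weights serves every sequence in the batch:
    each layer is 'read from SSD' once, not once per request."""
    h = batch
    for i in range(n_layers):
        w = load_layer(i)  # single read, shared by all rows of h
        h = np.tanh(h @ w)
    return h

batch = rng.standard_normal((16, dim)).astype(np.float32)  # 16 requests
out = batched_forward(batch)
print(out.shape)  # (16, 8)
```

With weights streamed from storage, throughput scales with batch size almost for free, since the per-pass SSD cost is fixed regardless of how many requests ride along.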