| ▲ | ashwinnair99 6 hours ago |
| A year ago this would have been considered impossible. The hardware is moving faster than anyone's software assumptions. |
|
| ▲ | cogman10 6 hours ago | parent | next [-] |
| This isn't a hardware feat, this is a software triumph. They didn't make special purpose hardware to run a model. They crafted a large model so that it could run on consumer hardware (a phone). |
| |
| ▲ | pdpi 6 hours ago | parent | next [-] | | It's both. We haven't had phones running laptop-grade CPUs/GPUs for that long, and that is a very real hardware feat. Likewise, nobody would've said running a 400b LLM on a low-end laptop was feasible, and that is very much a software triumph. | | |
| ▲ | bigyabai 5 hours ago | parent [-] | | > We haven't had phones running laptop-grade CPUs/GPUs for that long Agree to disagree, we've had laptop-grade smartphone hardware for longer than we've had LLMs. | | |
| ▲ | pdpi 4 hours ago | parent [-] | | Kind of. We've had solid CPUs for a while, but GPUs have lagged behind (and they're the ones that matter for this particular application). iPhones still lead by a comfortable margin on this front, but have historically been pretty limited on the IO front (only supported USB2 speeds until recently). |
|
| |
| ▲ | smallerize 6 hours ago | parent | prev | next [-] | | The iPhone 17 Pro launched 8 months ago with 50% more RAM and about double the inference performance of the previous iPhone Pro (also 10x prompt processing speed). | | | |
| ▲ | SV_BubbleTime 4 hours ago | parent | prev | next [-] | | >triumph It’s been a lot of years, but all I can hear after reading that is … I’m making a note here, huge success | | | |
| ▲ | anemll 3 hours ago | parent | prev [-] | | both, tbh |
|
|
| ▲ | mannyv 5 hours ago | parent | prev | next [-] |
| The software has real software engineers working on it instead of researchers. Remember when people were arguing about whether to use mmap? What a ridiculous argument. At some point someone will figure out how to tile the weights and the memory requirements will drop again. |
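Tiling the weights as described could look roughly like the following Python sketch (illustrative only, not any particular runtime's code; the function name and shapes are made up): stream weight tiles through `mmap` so only the tile currently being multiplied is resident, rather than loading the whole matrix up front.

```python
# Hedged sketch: tiled matrix-vector product over memory-mapped weights.
# np.memmap faults pages in on first touch, and the OS can evict them
# under memory pressure, so peak residency is roughly one tile.
import numpy as np

def tiled_matvec(weight_path, shape, x, tile_rows=1024, dtype=np.float16):
    """Compute W @ x while keeping only ~tile_rows rows of W in memory."""
    rows, cols = shape
    w = np.memmap(weight_path, dtype=dtype, mode="r", shape=shape)
    out = np.empty(rows, dtype=np.float32)
    for r in range(0, rows, tile_rows):
        tile = w[r:r + tile_rows]               # touches only this tile's pages
        out[r:r + tile_rows] = tile.astype(np.float32) @ x
    return out
```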
| |
| ▲ | snovv_crash 5 hours ago | parent [-] | | The real improvement will be when the software engineers get into the training loop. Then we can have MoE that use cache-friendly expert utilisation and maybe even learned prefetching for what the next experts will be. | | |
| ▲ | zozbot234 5 hours ago | parent [-] | | > maybe even learned prefetching for what the next experts will be Experts are predicted by layer and the individual layer reads are quite small, so this is not really feasible. There's just not enough information to guide a prefetch. | | |
| ▲ | yorwba 4 hours ago | parent | next [-] | | It's feasible to put the expert routing logic in a previous layer. People have done it: https://arxiv.org/abs/2507.20984 | |
| ▲ | snovv_crash 4 hours ago | parent | prev [-] | | Manually no. It would have to be learned, and making the expert selection predictable would need to be a training metric to minimize. | | |
| ▲ | zozbot234 4 hours ago | parent [-] | | Making the expert selection more predictable also means making it less effective. There's no real free lunch. |
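The cross-layer routing idea in the subthread above can be sketched in a few lines of Python (my own naming and structure, not the linked paper's code): layer i+1's experts are routed on layer i's input, so the storage-bound weight fetch can overlap layer i's compute. Routing on the earlier activation is exactly the approximation being traded for prefetchability.

```python
# Hedged sketch of early expert routing with overlapped weight prefetch.
from concurrent.futures import ThreadPoolExecutor

def run_moe(layers, x, load_experts, top_k=2):
    """layers: list of (router, experts); router(x) returns ranked expert ids.
    load_experts(layer_idx, ids) is the slow, storage-bound weight fetch."""
    pool = ThreadPoolExecutor(max_workers=1)
    ids = layers[0][0](x)[:top_k]
    pending = pool.submit(load_experts, 0, ids)   # start the first fetch
    for i, _ in enumerate(layers):
        experts = pending.result()                # wait on prefetched weights
        if i + 1 < len(layers):
            # The approximation: route layer i+1 on layer i's *input*,
            # so its fetch runs while layer i computes below.
            nxt = layers[i + 1][0](x)[:top_k]
            pending = pool.submit(load_experts, i + 1, nxt)
        x = sum(e(x) for e in experts) / len(experts)
    pool.shutdown()
    return x
```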
|
|
|
|
|
| ▲ | Aurornis 5 hours ago | parent | prev | next [-] |
| It wasn't considered impossible. There are examples of large MoE LLMs running on small hardware all over the internet, like giant models on Raspberry Pi 5. It's just so slow that nobody pursued it seriously. It's fun to see these tricks implemented, but even on this 2025 top spec iPhone Pro the output is 100X slower than output from hosted services. |
| |
| ▲ | zozbot234 5 hours ago | parent [-] | | If the bottleneck is storage bandwidth that's not "slow". It's only slow if you insist on interactive speeds, but the point of this is that you can run cheap inference in bulk on very low-end hardware. | | |
| ▲ | Aurornis 3 hours ago | parent | next [-] | | > If the bottleneck is storage bandwidth that's not "slow" It is objectively slow at around 100X slower than what most people consider usable. The quality is also degraded severely to get that speed. > but the point of this is that you can run cheap inference in bulk on very low-end hardware. You always could, if you didn't care about speed or efficiency. | | |
| ▲ | zozbot234 2 hours ago | parent [-] | | You're simply pointing out that most people who use AI today expect interactive speeds. You're right that the point here is not raw power efficiency (having to read from storage will impact energy per operation, and datacenter-scale AI hardware beats edge hardware anyway by that metric) but the ability to repurpose cheaper, lesser-scale hardware is also compelling. |
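The storage-bandwidth framing above can be put in rough numbers (all figures illustrative, not measured): when the active expert weights must be streamed from flash for every token, throughput is bounded by storage bandwidth divided by active-weight bytes per token.

```python
# Back-of-the-envelope: storage-bandwidth-bound token rate.
def tokens_per_sec(active_params_billion, bits_per_weight, storage_gb_per_s):
    bytes_per_token = active_params_billion * 1e9 * bits_per_weight / 8
    return storage_gb_per_s * 1e9 / bytes_per_token

# e.g. a hypothetical ~17B-active-param MoE at 4-bit over a ~3 GB/s link:
rate = tokens_per_sec(17, 4, 3.0)   # roughly 0.35 tokens/s
```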
| |
| ▲ | Terretta 4 hours ago | parent | prev [-] | | > very low-end hardware iPhone 17 Pro outperforms AMD’s Ryzen 9 9950X per https://www.igorslab.de/en/iphone-17-pro-a19-pro-chip-uebert... | | |
|
|
|
| ▲ | t00 3 hours ago | parent | prev | next [-] |
| /FIFY A year ago this would have been considered impossible. The software is moving faster than anyone's hardware assumptions. |
|
| ▲ | ottah 4 hours ago | parent | prev | next [-] |
| I mean, by any reasonable standard it still is. Almost any computer can run an LLM; it's just a matter of how fast, and 0.4k/s (peak before first token) is not really what anyone would call running. It's a demo, but practically speaking it's entirely useless. |
| |
| ▲ | alephnerd 4 hours ago | parent [-] | | Devil's advocate - this actually shows how promising TinyML and EdgeML capabilities are. SoCs comparable to the A19 Pro are highly likely to be commoditized in the next 3-5 years, in the same manner that SoCs comparable to the A13 already are. |
|
|
| ▲ | iberator 3 hours ago | parent | prev [-] |
| Does the iPhone have some kind of hardware acceleration for neural networks/AI? |