mackopes | 3 days ago
For some time I've had the feeling that Apple actually IS playing the hardware game in the age of AI. Even though they aren't actively innovating on AI software or shipping AI products, their hardware (especially the unified memory) is great for running large models locally. You can't get a consumer-grade GPU with enough VRAM to run a large model, but you can do so with MacBooks. I wonder if their path will be to double down on that and ship devices that let you run third-party AI models locally and privately. If only they made their unified memory faster, as that seems to be the biggest bottleneck for LLM tokens-per-second performance.
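A rough way to see why bandwidth dominates: during decode, every generated token has to stream essentially all of the quantized weights through the memory bus, so spec-sheet bandwidth divided by model size puts a hard ceiling on tokens/sec. A minimal sketch; the bandwidth figures below are spec-sheet assumptions, not measurements:

    # Bandwidth-bound ceiling for single-stream LLM decode:
    # each new token reads (roughly) every weight once.
    def max_tokens_per_sec(bandwidth_gb_s, params_billions, bits_per_weight):
        weights_gb = params_billions * bits_per_weight / 8  # model size in GB
        return bandwidth_gb_s / weights_gb

    # 70B model at 4-bit quantization (~35 GB of weights):
    print(max_tokens_per_sec(546, 70, 4))  # ~15.6 tok/s at ~546 GB/s (M4 Max spec)
    print(max_tokens_per_sec(100, 70, 4))  # ~2.9 tok/s at ~100 GB/s (typical desktop DDR5)

Real decode speed lands well below the ceiling (KV-cache traffic, kernel overhead), but the ranking between machines tracks memory bandwidth, which is why faster unified memory would matter more than simply more of it.
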
ChocolateGod | 3 days ago
> You can't get a consumer-grade GPU with enough VRAM to run a large model, but you can do so with MacBooks.

You can, if you're willing to trust a modded GPU with leaked firmware from a Chinese backshop.

gmays | 3 days ago
True, but Apple is a consumer hardware company, which at their scale means billions of users. We may care about running LLMs locally, but 99% of consumers don't. They want the easiest/cheapest path, which will always be the cloud models. Spending ~$6k (what my M4 Max cost) every N years, since models and hardware keep improving, just to run a somewhat decent model locally isn't a consumer thing. It's nonviable for a consumer hardware business at Apple's scale.

karmakaze | 3 days ago
On a hypothetical 70B q4 model, the Ryzen AI Max+ 395 (128GB of memory with 96GB allocated to the iGPU) delivers ~2–5 tokens/sec, slightly trailing the M4 Max's ~3–7 tokens/sec. I expect AMD's next generation can easily catch up to or surpass the M4 Max. A pair of MaxSun Intel Arc B60 48GB GPUs (dual 24GB B580s on one card) at $1200 each also outperforms the M4 Max.

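Those figures line up with a bandwidth-bound back-of-envelope check. Plugging in commonly cited spec-sheet bandwidths (assumptions, not benchmarks) for the same hypothetical 70B q4 model:

    # Spec-sheet bandwidth / q4 weight size = hard ceiling on decode tok/s.
    weights_gb = 70 * 4 / 8  # ~35 GB of 4-bit weights
    machines = [
        ("Ryzen AI Max+ 395", 256.0, "2-5"),  # ~256 GB/s LPDDR5X (assumed)
        ("M4 Max", 546.0, "3-7"),             # ~546 GB/s LPDDR5X (assumed)
    ]
    for name, bw, quoted in machines:
        print(f"{name}: ceiling {bw / weights_gb:.1f} tok/s, quoted {quoted} tok/s")

Both quoted ranges sit at a comparable fraction of their respective ceilings (~7.3 and ~15.6 tok/s), so the gap between the two machines is mostly a bandwidth gap.
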
csomar | 3 days ago
This. If we plateau around current SOTA LLM performance and 192/384GB of memory can run a competitive model, Apple computers could become the new iPhone. They have a unique and unmatched product because of their hardware investment. Of course, nobody knows how this will eventually play out, and people without inside information on what these big organizations have can't make such predictions.
orbifold | 3 days ago
I think it's a given that they're aiming for a fully custom training cluster, with custom training chips and inference hardware. That would align well with their abilities, and it actually isn't too hard for them to pull off given that they already have very decent processors, GPUs, and NPUs.

stefan_ | 3 days ago
Memory is not in any way, shape, or form some crucial advantage; you were just tricked into thinking it is because memory is used for market segmentation, and nobody would slaughter their datacenter cash cow. Inference, and god forbid training, on consumer Apple hardware is terrible and behind.