▲ Aurornis | 13 hours ago
> Their top model still only has "Up to 228 GB/s" bandwidth which places it in the low end category for anything AI related, for comparison Apple Silicon is up to 800GB/s

Most Apple Silicon is much less than 800 GB/s. The base M4 is only 120 GB/s, and the next step up, the M4 Pro, is 273 GB/s. That's in the same range as this part. It's not until you step up to the high-end M4 Max parts that Apple's memory bandwidth starts to diverge.

For the target market, where long battery life is a high priority, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn't a good idea.
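For rough intuition about why these numbers matter for local LLMs: token generation on a dense model is memory-bound, so advertised bandwidth divided by the model's weight footprint gives a hard ceiling on tokens/s. A minimal back-of-envelope sketch in Python, using the bandwidth figures cited in this thread; the 8B-parameter, ~4-bit-quantized model size is an illustrative assumption, and real throughput lands well below these ceilings:

    # Decode is memory-bound: each generated token reads (roughly) every
    # weight once, so tokens/s <= bandwidth / model_bytes.
    model_bytes = 8e9 * 0.5  # assumption: 8B params quantized to ~4 bits/weight

    for name, gbps in [
        ("Snapdragon (top, up to)", 228),
        ("Apple M4 (base)",         120),
        ("Apple M4 Pro",            273),
        ("Apple M2 Max",            400),
        ("Apple Silicon (up to)",   800),
    ]:
        ceiling = gbps * 1e9 / model_bytes
        print(f"{name:24s} {gbps:3d} GB/s -> ~{ceiling:5.1f} tok/s ceiling")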
▲ Rohansi | 13 hours ago | parent | next
This, and always check benchmarks instead of assuming memory bandwidth is the only possible bottleneck. Apple Silicon definitely does not fully use its advertised memory bandwidth when running LLMs. | ||||||||
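Spec sheets quote peak bandwidth; sustained numbers are lower. A crude STREAM-style copy test makes the gap visible (a minimal sketch, assuming NumPy is available; a single-threaded copy like this typically won't saturate the bus either, which is exactly why measured benchmarks beat spec-sheet comparisons):

    import time
    import numpy as np

    n = 1 << 27                 # 2**27 float64s, ~1 GiB per array
    a = np.random.rand(n)
    b = np.empty_like(a)
    np.copyto(b, a)             # warm-up so page faults don't skew the timing

    best = float("inf")
    for _ in range(5):
        t0 = time.perf_counter()
        np.copyto(b, a)         # moves 2 * n * 8 bytes: read a, write b
        best = min(best, time.perf_counter() - t0)

    print(f"~{2 * n * 8 / best / 1e9:.1f} GB/s sustained copy bandwidth")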
▲ smcleod | 11 hours ago | parent | prev
As I stated, this is the top Qualcomm model we're talking about, not the base model, which is significantly slower. Given that their top model underperforms the most common M4 chip and the M5 is about to be released, it's not very impressive at all. Even the old M2 Max in my early-2023 MacBook Pro has 400 GB/s.