smcleod 14 hours ago

Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI related. For comparison, Apple Silicon is up to 800 GB/s and Nvidia cards are around 1800 GB/s, and there's no word on whether it supports 256-512 GB of memory.

Aurornis 13 hours ago | parent | next [-]

> Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI related. For comparison, Apple Silicon is up to 800 GB/s

Most Apple Silicon is much less than 800 GB/s.

The base M4 is only 120 GB/s, and the next step up, the M4 Pro, is 273 GB/s. That's in the same range as this part.

It’s not until you step up to the high end M4 Max parts that Apple’s memory bandwidth starts to diverge.

For the target market, where long battery life is a high priority, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn't a good idea.

Rohansi 13 hours ago | parent | next [-]

This, and always check benchmarks instead of assuming memory bandwidth is the only possible bottleneck. Apple Silicon definitely does not fully use its advertised memory bandwidth when running LLMs.
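To make that concrete, here's a back-of-envelope sketch of the memory-bound ceiling on decode speed: each generated token has to stream every weight from RAM once, so tokens/sec is capped at roughly effective bandwidth divided by model size. The 0.7 efficiency factor and 5 GB model size (about an 8B model at 4-bit quantization) are illustrative assumptions, not measurements; the bandwidth figures are the ones quoted in this thread.

    # Rough memory-bound ceiling for local LLM decode: each token streams
    # all model weights from RAM once, so tok/s <= bandwidth / model size.

    def tokens_per_sec_ceiling(bandwidth_gbs: float, model_size_gb: float,
                               efficiency: float = 0.7) -> float:
        """Upper bound on decode tokens/sec for a memory-bound workload.

        efficiency: fraction of advertised bandwidth actually sustained;
        0.7 is an assumed, illustrative figure, not a measured one.
        """
        return (bandwidth_gbs * efficiency) / model_size_gb

    MODEL_GB = 5.0  # ~8B parameters at 4-bit quantization (assumption)

    # Bandwidth figures as quoted in this thread.
    for name, bw in [
        ("Snapdragon X2 Elite Extreme", 228),
        ("Apple M4", 120),
        ("Apple M4 Pro", 273),
        ("Apple M2 Max", 400),
        ("Apple Silicon top end", 800),
    ]:
        print(f"{name:28} ~{tokens_per_sec_ceiling(bw, MODEL_GB):4.0f} tok/s")

Even this optimistic ceiling puts the 228 GB/s part around 32 tok/s on a mid-size model, which is usable but a long way from the high-end Apple and Nvidia figures upthread.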

smcleod 11 hours ago | parent | prev [-]

As I stated, this is the top Qualcomm model we're talking about, not the base model, whose bandwidth is significantly lower.

Given that their top model underperforms the most common M4 chip, and the M5 is about to be released, it's not very impressive at all.

Even the old M2 Max in my early 2023 MacBook Pro has 400GB/s.

daemonologist 10 hours ago | parent [-]

The base-model X2 Elite has a memory bandwidth of 152 GB/s. The M4 Pro is a modest win against the Extreme, as mentioned, and Qualcomm has no M4 Max competitor that I'm aware of.

https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets...

I think the pure hardware specs compare reasonably against AS, aside from the lack of a Max of course. Apple's vertical integration and power efficiency make their product much more compelling though, at least to me. (Qualcomm, call me when the Linux support is good.)

piskov 14 hours ago | parent | prev [-]

Most consumers don’t care about local LLMs anyway.

alphabettsy 14 hours ago | parent [-]

Yet LLM apps top the App Store charts. Considering that these machines are not upgradable, I think the specs are relevant, just as I thought Apple shipping systems with 8 GB minimums was poor future-proofing.

p_ing 13 hours ago | parent | next [-]

Looking at the Mac App Store in the US, no they don't. There's not an LLM app in sight (local or otherwise).

piskov 14 hours ago | parent | prev [-]

Which apps with local LLMs top the App Store charts?

happymellon 3 hours ago | parent [-]

They asked ChatGPT.