▲ | smcleod 14 hours ago |
Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI related. For comparison, Apple Silicon goes up to 800 GB/s and Nvidia cards are around 1800 GB/s, and there's no word on whether it supports 256-512 GB of memory.
▲ | Aurornis 13 hours ago | parent | next [-] |
> Their top model still only has "Up to 228 GB/s" bandwidth which places it in the low end category for anything AI related, for comparison Apple Silicon is up to 800 GB/s

Most Apple Silicon is much less than 800 GB/s. The base M4 is only 120 GB/s, and the next step up, the M4 Pro, is 273 GB/s. That’s in the same range as this part. It’s not until you step up to the high-end M4 Max parts that Apple’s memory bandwidth starts to diverge.

For the target market, with long battery life as a high priority, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn’t a good idea.
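To make the bandwidth comparison concrete, here's a rough back-of-envelope sketch (a minimal sketch, not a benchmark: it assumes decode is memory-bandwidth-bound, i.e. every generated token streams the full model weights from memory once, and the ~5 GB model size, roughly an 8B model at 4-bit quantization, is an assumption):

    # Upper bound on LLM decode speed when generation is memory-bandwidth-bound:
    # tokens/sec <= memory bandwidth / bytes of weights streamed per token.
    # The ~5 GB model size is an illustrative assumption, not a measurement.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
        return bandwidth_gb_s / model_gb

    for name, bw in [("228 GB/s (this part)", 228),
                     ("273 GB/s (M4 Pro)", 273),
                     ("800 GB/s (top Apple Silicon)", 800),
                     ("1800 GB/s (Nvidia)", 1800)]:
        print(f"{name}: ~{max_tokens_per_sec(bw, 5.0):.0f} tok/s ceiling")

Using the figures cited in this thread, that gives ceilings of roughly 46, 55, 160, and 360 tokens/sec respectively; real throughput will be lower, but the linear scaling with bandwidth is why the 228 GB/s figure matters for local LLM use.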
▲ | piskov 14 hours ago | parent | prev [-] |
Most consumers don’t care about local LLMs anyway.