BadBadJellyBean 3 hours ago
I am not on reddit. What are they saying?
mapontosevenths 2 hours ago
It isn't for "running models." Inference workloads like that are faster on a Mac Studio, if that's the goal, since Apple has faster memory. These devices are for AI R&D: if you need to build models or fine-tune them locally, they're great.

That said, I run GPT-OSS 120B on mine and it's 'fine'. I spend some time waiting on it, but the fact that I can run such a large model locally at a "reasonable" speed is still kind of impressive to me.

It's REALLY fast for diffusion as well. If you're into image/video generation it's kind of awesome. All that compute really shines for workloads that aren't memory-speed bound.
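Rough back-of-envelope on the "memory-speed bound" point, if you want to see why bandwidth dominates decode speed (a sketch, not a benchmark: the `decode_tps` helper is mine, the bandwidth figures are illustrative placeholders, and the ~5B active params / 4-bit weights for GPT-OSS 120B are my assumptions):

    # Token-by-token decode streams the active weights through memory once
    # per token, so it's bounded by tok/s <= bandwidth / bytes moved.
    def decode_tps(bandwidth_gbs, active_params_b, bytes_per_param):
        bytes_per_token = active_params_b * 1e9 * bytes_per_param
        return bandwidth_gbs * 1e9 / bytes_per_token

    # GPT-OSS 120B is MoE: assume ~5.1B params active per token at ~4-bit.
    # Bandwidths below are illustrative -- plug in your own hardware's spec.
    for name, bw_gbs in [("~270 GB/s box", 270), ("~800 GB/s box", 800)]:
        print(f"{name}: <= ~{decode_tps(bw_gbs, 5.1, 0.5):.0f} tok/s")

Compute only matters once you stop being bandwidth-limited, which is why prefill, fine-tuning, and diffusion are where the extra FLOPS actually show up.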