buyucu 5 hours ago

I bought a GMKtec EVO-X2, which is a 128 GB unified memory system. Strong recommend.

Keyframe an hour ago | parent | next [-]

That's the AMD Ryzen AI Max+ 395, right? Lots of those boxes popping up recently, but isn't that dog slow? And I can't believe I'm saying this, but maybe a Mac maxed out with RAM would be a better option?

ricardobeat 10 minutes ago | parent | next [-]

Yes, but the Mac costs 3-4x more. You can get one of these 395 systems with 96 GB for ~1k.

Keyframe 5 minutes ago | parent [-]

When I was looking it was more like 1.6k euros, but still a great price. A Mac Studio with the 16/40/16 M4 Max and 128 GB is double that. That's all within the range of "affordable". Now, if the Mac is at least twice as fast, I don't see a reason not to, even though buying a Mac is against my religion as well.

buyucu 35 minutes ago | parent | prev [-]

I'm not buying a Mac. Period.

te0006 2 hours ago | parent | prev [-]

Interesting - do you need to take any special measures to get OSS genAI models to work on this architecture? Can you use inference engines like Ollama and vLLM off-the-shelf (as Docker containers) there, with just the Radeon 8060S GPU? What token rates do you achieve?

(edit: corrected mistake w.r.t. the system's GPU)
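For the token-rate part of the question, one minimal way to measure generation speed once an engine like Ollama is running locally is to call its HTTP API and read the timing fields it reports. This is only a sketch: it assumes Ollama's default port 11434, and the model name is a placeholder for whatever has actually been pulled.

    # Rough token-rate check against a local Ollama server (default port 11434).
    # Assumes Ollama is already running and the model below has been pulled;
    # the model name is a placeholder.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1:8b",  # placeholder, substitute your own model
            "prompt": "Explain unified memory in two sentences.",
            "stream": False,
        },
        timeout=600,
    )
    data = resp.json()

    # Ollama reports eval_count (generated tokens) and eval_duration (nanoseconds).
    tok_per_s = data["eval_count"] / (data["eval_duration"] / 1e9)
    print(f"generation speed: {tok_per_s:.1f} tok/s")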

buyucu 35 minutes ago | parent [-]

I just use llama.cpp. It worked out of the box.
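For a rough idea of what "out of the box" can look like from Python, here is a minimal sketch using the llama-cpp-python bindings, a separate wrapper around the same llama.cpp engine and not necessarily what the commenter runs. The model path and context size are placeholders, and full GPU offload assumes the backend was built with Vulkan or ROCm support for the iGPU.

    # Minimal sketch with the llama-cpp-python bindings (wrapper around llama.cpp).
    # Model path is a placeholder; n_gpu_layers=-1 offloads all layers to the GPU,
    # which assumes a Vulkan or ROCm build of the backend.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/qwen2.5-32b-instruct-q4_k_m.gguf",  # placeholder path
        n_gpu_layers=-1,  # offload everything to the iGPU
        n_ctx=8192,
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Say hello in five words."}],
        max_tokens=32,
    )
    print(out["choices"][0]["message"]["content"])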