cpburns2009 7 hours ago

In my experience using llama.cpp (which ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model, and it's usually within 10%. I have access to an RX 7900 XT I should compare it against, though.
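For anyone wanting to run the same comparison, llama.cpp ships a `llama-bench` tool, so you can build it once per backend and benchmark the same GGUF model on each. A rough sketch (the exact CMake flag names vary between llama.cpp versions, the `gfx1100` target is what I'd expect for an RX 7900 XT, and `model.gguf` is a placeholder):

```shell
# Build llama.cpp with the Vulkan backend
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release -j

# Build a second copy with the ROCm/HIP backend
# (older releases used -DGGML_HIPBLAS=ON instead of -DGGML_HIP=ON;
#  gfx1100 = RDNA3, adjust for your GPU)
cmake -B build-rocm -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100
cmake --build build-rocm --config Release -j

# Run the same model through both builds and compare tokens/sec
./build-vulkan/bin/llama-bench -m model.gguf
./build-rocm/bin/llama-bench -m model.gguf
```

`llama-bench` reports prompt-processing and generation throughput separately, which matters here since the two backends can differ more on one than the other.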

metalliqaz 6 hours ago | parent [-]

Perhaps I should just google it, but I'm under the impression that ollama uses llama.cpp internally, not the other way around.

Thanks for that data point. I should experiment with ROCm.

cpburns2009 6 hours ago | parent | next [-]

I meant ollama uses llama.cpp internally. Sorry for the confusion.

naasking 4 hours ago | parent | prev [-]

From what I understand, the ROCm 7.x series is a lot buggier and has performance regressions on many GPUs. Vulkan performance for LLMs is apparently not far behind ROCm, and it's far more stable and predictable at this time.