cpburns2009 | 7 hours ago
In my experience using llama.cpp (which ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model, and the difference is usually within 10%. I have access to an RX 7900 XT I should compare against, though.
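If anyone wants to reproduce the comparison: llama.cpp picks its backend at build time, and ships a llama-bench tool for this kind of head-to-head. Rough sketch (the model path is a placeholder, and flags may drift between releases -- check the build docs for your version):

    # Build twice from a llama.cpp checkout, once per backend:
    cmake -B build-vulkan -DGGML_VULKAN=ON && cmake --build build-vulkan --config Release
    cmake -B build-rocm   -DGGML_HIP=ON    && cmake --build build-rocm   --config Release

    # Benchmark the same model with all layers offloaded to the GPU:
    ./build-vulkan/bin/llama-bench -m model.gguf -ngl 99
    ./build-rocm/bin/llama-bench   -m model.gguf -ngl 99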
metalliqaz | 6 hours ago
Perhaps I should just google it, but I'm under the impression that ollama uses llama.cpp internally, not the other way around. Thanks for that data point; I should experiment with ROCm.