Llama.cpp also has a Vulkan backend that is portable and performant; you don't need to mess with ROCm at all.
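Something along these lines is usually all it takes (a rough sketch: it assumes a recent llama.cpp checkout with the Vulkan SDK installed, and the exact flag name has changed between versions):

    # configure with the Vulkan backend enabled, then build
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release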
Oh yes, I know, but "can I compile llama.cpp with ROCm" has been my yardstick for how good AMD's drivers are for some time.