Huh, I just moved to AMDVLK yesterday, after learning that it gives ~50% higher prompt processing (PP) throughput in llama.cpp compared to RADV: https://www.reddit.com/r/LocalLLaMA/comments/1nabcek/comment...