pja 4 hours ago
> I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon ROCm for my card (6900xt)

Is AMD capable of anything these days? Sure. llama.cpp will happily run these kinds of LLMs using either HIP or Vulkan. Vulkan is easier to get going using the Mesa open-source drivers under Linux; HIP might give you slightly better performance.
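For the Vulkan route, a minimal build sketch (assuming a recent llama.cpp checkout and the Mesa Vulkan driver already installed; the `GGML_VULKAN` and `GGML_HIP` CMake options are the backend switches llama.cpp documents, and the model path is a placeholder):

```shell
# Vulkan backend: works on top of Mesa's RADV driver, no ROCm install needed
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# HIP backend instead (requires a working ROCm/HIP toolchain for your GPU):
#   cmake -B build -DGGML_HIP=ON

# Run a model; /path/to/model.gguf is a placeholder for your own file
./build/bin/llama-cli -m /path/to/model.gguf -p "Hello" -ngl 99
```

`-ngl 99` offloads all layers to the GPU; drop it (or lower the number) if you run out of VRAM.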