clusterhacks 15 hours ago
You know, I haven't even been thinking about those AMD GPUs for local LLMs, and it's clearly a blind spot for me. How is it? I'd guess a bunch of the MoE models actually run well?
stusmall 13 hours ago
I've been running local models on an AMD 7800 XT with ollama-rocm. I've had zero technical issues. It's really just that the usefulness of a model constrained to 16GB of VRAM + 64GB of main RAM is questionable, but that isn't an AMD-specific issue. It was a similar experience running locally with an Nvidia card.
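For anyone curious what that setup looks like from the client side, here's a minimal sketch using the ollama Python library, assuming the ollama-rocm server is already running locally on its default port and the model has been pulled beforehand. The model name is a hypothetical choice; substitute whatever fits in your VRAM.

    import ollama  # pip install ollama; talks to a local ollama server

    # Hypothetical model choice; any model pulled via `ollama pull` works.
    response = ollama.chat(
        model="llama3.1:8b",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    )
    print(response["message"]["content"])

The nice part of the ROCm build is that none of this client code changes between AMD and Nvidia; the GPU backend is entirely the server's concern.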