| ▲ | superkuh 5 hours ago |
| AMD hasn't signaled in behavior or words that they're going to actually support ROCm on $specificdevice for more than 4-5 years after release. Sometimes it's as little as the high 3.x years for shrinks like the consumer AMD RX 580. And often the ROCm support for consumer devices isn't out until a year after release, further cutting into that window. Meanwhile nvidia just dropped CUDA/driver support for 1xxx series cards from their most recent drivers this year. For me ROCm's mayfly lifetime is a dealbreaker. |
|
| ▲ | mindcrime 4 hours ago | parent | next [-] |
| > Last year, AMD ran a GitHub poll for ROCm complaints and received more than 1,000 responses. Many were around supporting older hardware, which is today supported either by AMD or by the community, and one year on, all 1,000 complaints have been addressed, Elangovan said. AMD has a team going through GitHub complaints, but Elangovan continues to encourage developers to reach out on X where he’s always happy to listen. |
| Seems like they're making some effort in that direction at least. If you have specific concerns, maybe try hitting up Anush Elangovan on Twitter? |
|
| ▲ | SwellJoe 4 hours ago | parent | prev | next [-] |
| Is it really that short? This support matrix shows ROCm 7.2.1 supporting quite old generations of GPUs, going back at least five or six years. I consider longevity important, too, but if they're actively supporting stuff released in 2020 (CDNA), I can't fault them too much. With open drivers on Linux, where all the real AI work is happening, I feel like this is a better longevity story than nvidia...where you're dependent on nvidia for kernel drivers in addition to CUDA. https://rocm.docs.amd.com/en/latest/compatibility/compatibil... |
|
| ▲ | lrvick 4 hours ago | parent | prev | next [-] |
| ROCm is open source and TheRock is community maintained, and any minute now the first Linux distro will have native in-tree builds. It will be supported for the foreseeable future thanks to AMD's open development approach. It is Nvidia that has the track record of closed drivers and of insisting on doing all software dev itself, without community improvements to expected results. |
| ▲ | KennyBlanken 4 hours ago | parent [-] | | > expected results The defacto GPU compute platform? With the best featureset? | | |
| ▲ | lrvick 4 hours ago | parent [-] | | And the worst privacy, transparency, and FOSS integration due to their insistence on a heavily proprietary stack. Also pretty hard to beat a Strix Halo right now in TPS for the money and power consumption. Even that aside there exist plenty like me that demand high freedom and transparency and will pay double for it if we have to. | | |
| ▲ | KennyBlanken 4 hours ago | parent [-] | | > And the worst privacy, transparency, and FOSS integration due to their insistence on a heavily proprietary stack. The market doesn't care about any of that. The consumer market doesn't care, and the commercial market definitely does not. The consumer market wants the most Fortnite frames per second per dollar. The commercial market cares about how much compute they can do per watt, per slot. > there exist plenty like me that demand high freedom and transparency and will pay double for it if we have to. The four percent share of the datacenter market and five percent of the desktop GPU market say (very strongly) otherwise. I have a 100% AMD system in front of me so I'm hardly an NVIDIA fanboy, but you thinking you represent the market is pretty nuts. | | |
| ▲ | lrvick 3 hours ago | parent [-] | | I did not claim to represent the market as a whole, but I feel I likely represent a significant enough segment of it that AMD is going to be just fine. I think local power efficient LLMs are going to make those datacenter numbers less relevant in the long run. |
| ▲ | canpan 4 hours ago | parent | prev | next [-] |
| I was thinking of getting 2x R9700 for a home workstation (mostly inference). It is much cheaper than a similar Nvidia build, but I'm still not sure whether it's good value or just more trouble. |
| ▲ | stephlow 4 hours ago | parent | next [-] | | I own a single R9700 for the same reason you mentioned, and I'm looking into getting a second one. It was a lot of fiddling to get working on Arch, but RDNA4 and ROCm have come a long way. Every once in a while Arch package updates break things, but that's not exclusive to ROCm. LLMs run great on it; it's happily running gemma4 31b at the moment and I'm quite impressed. For the amount of VRAM you get it's hard to beat, apart from the Intel cards maybe, but the driver support doesn't seem to be that great there either. Had some trouble with running comfyui, but it's not my main use case, so I haven't spent a lot of time figuring that out yet. | | |
| ▲ | canpan 3 hours ago | parent [-] | | Thanks for the answer, that brings my hopes up. Looking at my local shops, I can get three of these cards for the price of one 5090. May I ask what kind of tok/s you are getting with the R9700? I assume you got it fully in VRAM? | | |
| ▲ | theoli 30 minutes ago | parent | next [-] | | I have a dual R9700 machine, with both cards on PCIe gen4 x8 slots. The 256-bit GDDR6 memory bandwidth is the main limiting factor and makes dense models above 9b fairly slow. The model that is currently loaded full time for all workloads on this machine is Unsloth's Q3_K_M quant of Qwen 3.5 122b, which has 10b active parameters. With almost no context usage it will generate 59 tok/sec. At 10,000 input tokens it will prefill at about 1500 tok/sec and generate at 51 tok/sec. At 110,000 input tokens it will prefill at about 950 tok/sec and generate at 30 tok/sec. Smaller MoE models with 3b active will push 70 tok/sec at 10,000 context. Dense models like Qwen 3.5 27b and Devstral Small 2 at 24b will only generate at around 13-15 tok/sec with 10,000 context. This is all on llama.cpp with the Vulkan backend. I didn't get too far in testing or using anything that requires ROCm, because there is an outstanding ROCm bug where the GPU clock stays at 100% (drawing around 60 watts) even when the model is not processing anything. The issue is now closed, but multiple commenters indicate it is still a problem. Using the Vulkan backend my per-card idle draw is between 1 and 2 watts with the display outputs shut down and no kernel frame buffer. | |
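The dense-versus-MoE gap in those numbers is consistent with a simple memory-bandwidth bound: during decode, every generated token streams all active weights from VRAM at least once. A rough sketch in Python (the ~640 GB/s figure for the R9700's 256-bit GDDR6 and the bits-per-weight values for the quants are ballpark assumptions, not measured specs):

```python
# Upper bound for decode speed when generation is memory-bandwidth bound:
# tok/s <= memory bandwidth / bytes of active weights streamed per token.

def decode_upper_bound(bandwidth_gb_s: float, active_params_b: float,
                       bits_per_weight: float) -> float:
    """Theoretical max tokens/sec for bandwidth-bound decoding on one card."""
    weights_gb = active_params_b * bits_per_weight / 8  # GB read per token
    return bandwidth_gb_s / weights_gb

R9700_BW = 640.0  # assumed GB/s for 256-bit GDDR6; check your card's spec sheet

dense_24b = decode_upper_bound(R9700_BW, 24, 4.8)  # ~24b dense model at ~Q4
moe_10b = decode_upper_bound(R9700_BW, 10, 3.4)    # ~10b active params at ~Q3

print(f"24b dense ceiling:  {dense_24b:.0f} tok/s")
print(f"10b-active ceiling: {moe_10b:.0f} tok/s")
```

Real throughput lands well under these ceilings (roughly a third here, once kernel overheads, KV-cache reads, and the multi-GPU split are counted), but the ratio shows why a 10b-active MoE decodes several times faster than a 24b dense model on the same card.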
| ▲ | jhgorrell an hour ago | parent | prev [-] | | Stock install, no tuning.

    $ uname -r
    6.8.0-107-generic
    $ ollama --version
    ollama version is 0.20.2
    $ ollama run "gemma4:31b" --verbose "write fizzbuzz in python."
    [...]
    total duration:       45.141599637s
    load duration:        143.633498ms
    prompt eval count:    21 token(s)
    prompt eval duration: 48.047609ms
    prompt eval rate:     437.07 tokens/s
    eval count:           1057 token(s)
    eval duration:        44.676612241s
    eval rate:            23.66 tokens/s
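For anyone unfamiliar with ollama's `--verbose` output: the reported rates are simply the token counts divided by the matching durations, so generation speed here works out to about 24 tok/s. A quick sanity check of the figures above:

```python
# Recompute ollama's reported rates from the --verbose figures above.
prompt_rate = 21 / 0.048047609    # prompt eval count / prompt eval duration
eval_rate = 1057 / 44.676612241   # eval count / eval duration

print(f"prompt eval rate: {prompt_rate:.2f} tokens/s")  # matches 437.07
print(f"eval rate: {eval_rate:.2f} tokens/s")           # matches 23.66
```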
| ▲ | chao- 4 hours ago | parent | prev | next [-] | | Talking to friends who have fought more homelab battles than I ever will, my sense is that (1) AMD has done a better job with RDNA4 than with past generations, and (2) it seems very workload-dependent whether AMD consumer gear is "good value", "more trouble", or both at the same time. Edit: I misread the "2x r9700" as "2 rx9700", which differs from the topic of this comment (about RDNA4 consumer SKUs). I'll keep my comment up, but anyone looking to get Radeon PRO cards can (should?) disregard. | | | |
| ▲ | cyberax 4 hours ago | parent | prev [-] | | I have this setup, with 2x 32GB cards. It's perfect for my needs, and cheaper than anything comparable from NV. |
| ▲ | hotstickyballs 4 hours ago | parent | prev [-] |
| Driver support eats directly into driver development |