Philpax (5 days ago):
Hence the "if" :-) ROCm is getting some adoption, especially as some of the world's largest public supercomputers have AMD GPUs. Some of this is also being solved by working at a different abstraction layer: with PyTorch you can sometimes stay ignorant of the hardware you're running on. It's still leaky, but it's something.
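A minimal sketch of what that device-agnostic style looks like, assuming a recent PyTorch build (the module and tensor sizes are purely illustrative, not from the thread). On ROCm builds of PyTorch, the AMD backend is surfaced through the torch.cuda namespace, so the same branch covers NVIDIA and AMD GPUs:

    import torch

    # Pick whatever accelerator the local PyTorch build supports.
    # On ROCm builds, HIP is exposed through the torch.cuda API,
    # so this branch covers both NVIDIA and AMD GPUs.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")  # Apple Silicon
    else:
        device = torch.device("cpu")

    # The rest of the script never mentions the vendor again.
    model = torch.nn.Linear(1024, 1024).to(device)
    x = torch.randn(8, 1024, device=device)
    y = model(x)

Once the device is chosen, the rest of the code stays vendor-neutral, which is the "different abstraction layer" point above; the leaks show up in things like custom kernels and extensions that assume CUDA specifically.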
Q6T46nT668w6i3m (5 days ago):
Look at the state of PyTorch's CI pipelines and you'll immediately see that ROCm is a nightmare, especially nowadays when TPU and MPS, while missing features, rarely create cascading failures throughout the stack.
physicsguy (5 days ago):
I still don't see ROCm as that serious a threat; they're still a long way behind in library support. I used to point to rocFFT as an example: it was missing core functionality that cuFFT has had since around 2008. It looks like they've finally caught up now, but that's one library among many.
j45 (5 days ago):
Waiting just adds more dust to the skills pile. Programming languages are groups of syntax.
einpoklum (5 days ago):
Talking about hardware rather than software, you have AMD and Intel. And if your platform is not x86_64, NVIDIA is probably not even one of the competitors; there you have ARM, Qualcomm, Apple, Samsung, and probably some others.
sdenton4 (5 days ago):
...Well, the article compares GPUs to TPUs, made by a competitor you probably know the name of...