slashdev 6 days ago
This is a little crude still, but the fact that this is even possible is mind-blowing. If progress continues, this has the potential to break the vendor-locked nightmare that is GPU software and open the space up to real competition between hardware vendors. Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD. To get max performance you'd likely have to break the abstraction and write some vendor-specific code for each, but that's an optimization problem. You'd still have a portable kernel that runs cross-platform.
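The "portable kernel with vendor-specific escape hatches" idea can be sketched in plain Rust. This is a toy, not any real framework's API: the `Backend` trait, the `CudaLike`/`RocmLike` types, and the `axpy` kernel are all made up for illustration. The trait provides a portable default implementation, and a vendor backend could override it as the optimization step.

```rust
// Hypothetical backend abstraction -- names and trait are illustrative,
// not from burn, CUDA, or ROCm.
trait Backend {
    fn name(&self) -> &'static str;

    // Portable kernel: y = a*x + y, written once against the trait.
    // A vendor backend could override this with tuned, device-specific code.
    fn axpy(&self, a: f32, x: &[f32], y: &mut [f32]) {
        for (yi, xi) in y.iter_mut().zip(x) {
            *yi += a * xi;
        }
    }
}

// Stand-ins for an Nvidia-targeting and an AMD-targeting backend.
struct CudaLike;
struct RocmLike;

impl Backend for CudaLike {
    fn name(&self) -> &'static str { "cuda-like" }
    // A real backend would override axpy here with vendor-specific code.
}

impl Backend for RocmLike {
    fn name(&self) -> &'static str { "rocm-like" }
}

// Model code is generic over the backend, so it compiles and runs on either.
fn run_kernel<B: Backend>(b: &B, a: f32, x: &[f32], y: &mut [f32]) -> &'static str {
    b.axpy(a, x, y);
    b.name()
}
```

The point is that the numerical code lives behind the trait once; swapping vendors is a type parameter, and hand-tuning one vendor is an override, not a rewrite.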
willglynn 6 days ago
You might be interested in https://burn.dev, a Rust machine learning framework. It has CUDA and ROCm backends, among others.
bwfan123 6 days ago
> Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD

Not likely in the next decade, if ever. Unfortunately, the entire ecosystems of JAX and PyTorch are Python-based. Imagine retraining all those devs to use Rust tooling.
shmerl 5 days ago
Do you really need to break the abstraction? The current scenario, where SPIR-V is, say, compiled by Mesa into NIR and NIR is then compiled into GPU-specific machine code, works pretty well: optimizations can happen at different phases of compilation.