| ▲ | LegNeato 6 days ago |
| Rust is a system language, so you should have the control you need. We intend to bring GPU details and APIs into the language and core / std lib, and expose GPU and driver stuff to the `cfg()` system. (Author here) |
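To make the `cfg()` idea concrete, here is a purely hypothetical sketch. The `gpu_vendor` key is invented for illustration only; no such cfg exists or has been proposed in this exact form.

    // Hypothetical: `gpu_vendor` is an invented cfg key, used only to
    // illustrate exposing GPU/driver details to the `cfg()` system.
    #[cfg(gpu_vendor = "nvidia")]
    fn block_size() -> u32 {
        // Vendor-tuned value chosen at compile time.
        1024
    }

    #[cfg(not(gpu_vendor = "nvidia"))]
    fn block_size() -> u32 {
        // Conservative portable default.
        256
    }

    fn main() {
        println!("dispatching with block size {}", block_size());
    }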
|
| ▲ | vouwfietsman 5 days ago | parent | next [-] |
| > Rust is a system language, so you should have the control you need

I don't think this argument is promising. It's not about the power of the language, but the power of the abstractions you provide over the various GPU APIs. In fact, I could argue that one of Rust's main selling points (memory safety) has limited applicability in GPU land, because lifetimes are not a thing there the way they are in CPU land. I'm sure there are other benefits here, not least the tooling, but the language itself is certainly not the main selling point... |
|
| ▲ | Voultapher 6 days ago | parent | prev | next [-] |
| Who is "we" here? I'm curious to hear more about your ambitions, since surely pulling in wgpu or something similar seems out of scope for the traditionally lean Rust stdlib. |
| |
| ▲ | LegNeato 6 days ago | parent [-] | | Many of us working on Rust + GPUs in various projects have discussed starting a GPU working group to explore some of these questions: https://gist.github.com/LegNeato/a1fb3e3a9795af05f22920709d9... Agreed, I don't think we'd ever pull in things like wgpu, but we might create APIs or traits wgpu could use to improve perf/safety/ergonomics/interoperability. | | |
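As a rough illustration of what such traits could look like (all names invented here, not a real or proposed std or wgpu API), the idea would be small vocabulary traits that a backend like wgpu implements for its own buffer and queue types, so generic code can stay backend-agnostic:

    // Sketch only: invented trait names, not std or wgpu API.

    /// Memory that lives on a device rather than the host.
    pub trait DeviceBuffer {
        fn len(&self) -> usize;
        /// Copy host bytes into the device buffer.
        fn upload(&mut self, src: &[u8]);
        /// Copy device bytes back to the host.
        fn download(&self, dst: &mut [u8]);
    }

    /// Something that accepts recorded work and runs it on a device.
    pub trait DeviceQueue {
        type CommandBuffer;
        type Error;
        fn submit(&self, commands: Self::CommandBuffer) -> Result<(), Self::Error>;
    }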
| ▲ | jpc0 6 days ago | parent | next [-] | | Here's an idea: get Nvidia, AMD, Intel, and whoever else you can into a room. Get the LLVM folks into the same room. Compile LLVM IR directly into hardware instructions fed to the GPU; get them to open up. Having to target an API is part of the problem. Get them to let you write Rust that compiles directly into the code that will run on the GPU, not something that becomes something else, which becomes SPIR-V, which controls a driver that eventually runs it on the GPU. | | |
| ▲ | Ygg2 6 days ago | parent | next [-] | | Hell will freeze over, then go into negative Kelvin temperatures, before you see Nvidia agreeing in earnest to do so. They make too much money on NOT GETTING COMMODITIZED. Nvidia even changed CUDA to make the API incompatible with interpreters. It's the same reason Safari is in such a sorry state: why make your web browser better when it could cannibalize your app store? | | |
| ▲ | ashdksnndck 6 days ago | parent | next [-] | | Hmm. Maybe the opportunity is more like AMD, Intel, the various AI labs, and big tech getting together and, by their powers combined, figuring out a way to stop giving NVIDIA their margin? | | | |
| ▲ | jpc0 6 days ago | parent | prev | next [-] | | Somehow I want to believe that if you get everyone else in the room, and it becomes enough of a market force that Nvidia stops selling GPUs because of it, they will change. Cough, Linux GPU drivers. |
| ▲ | pjmlp 6 days ago | parent | prev | next [-] | | By making the Web browser "better", do you mean more ChromeOS-like? CUDA is great for Python as well. Maybe Intel and AMD should actually produce something worth using. | |
| ▲ | Ygg2 5 days ago | parent | next [-] | | > By making the Web browser "better", do you mean more ChromeOS-like?

Whichever part makes Safari completely fail at properly rendering Jira. A task even Firefox can do. | |
| ▲ | ninkendo 5 days ago | parent [-] | | > Whichever part makes Safari completely fail at properly rendering Jira

What evidence do you have that this is Safari’s fault and not Jira’s fault? Give me a web browser and I will write code that will fail in it and work in other browsers. |
| |
| ▲ | pawelmurias 5 days ago | parent | prev [-] | | Better for running web apps. | | |
| ▲ | pjmlp 5 days ago | parent [-] | | As long as they are using Web standards, and not Chrome APIs, I do agree. |
|
| |
| ▲ | shmerl 6 days ago | parent | prev [-] | | Yeah, Nvidia can get lost with their CUDA moat. But AMD should be interested. |
| |
| ▲ | bobajeff 6 days ago | parent | prev | next [-] | | Sounds sort of like the idea behind MLIR and its GPU dialects.

* https://mlir.llvm.org/docs/Dialects/NVGPU/
* https://mlir.llvm.org/docs/Dialects/AMDGPU/
* https://mlir.llvm.org/docs/Dialects/XeGPU/ | |
| ▲ | jpc0 6 days ago | parent | next [-] | | Very likely something along those lines. Effectively, standardise passing operations off to a coprocessor. C++ is moving in that direction with stdexec, the linear algebra library, and SIMD; I don't see why Rust wouldn't also do that. Why must I write a GPU kernel to have an algorithm execute on the GPU? We're talking about memory wrangling and linear algebra almost all of the time when dealing with GPUs in any way whatsoever, and I don't see why we need a different interface and API layer for that. OpenGL et al. abstract some of the linear algebra away from you, which is nice until you need to give a damn about the assumptions they made that are no longer valid. I would rather that code be in a library in the language of your choice that you can inspect and understand than hidden somewhere in a driver behind 3 layers of abstraction. | |
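For a sense of what a kernel-free, executor-style API could look like in Rust, here is a sketch. Everything in it is invented for illustration, loosely mirroring the spirit of C++'s stdexec; it is not an actual proposal or existing API.

    // Sketch only: `Executor` is an invented trait; nothing like this
    // exists in Rust's std today.

    /// Decides where an algorithm runs: host threads, a GPU queue, etc.
    pub trait Executor {
        fn map_in_place<F>(&self, data: &mut [f32], f: F)
        where
            F: Fn(f32) -> f32 + Sync;
    }

    /// Sequential CPU executor as the reference implementation.
    pub struct CpuSeq;

    impl Executor for CpuSeq {
        fn map_in_place<F>(&self, data: &mut [f32], f: F)
        where
            F: Fn(f32) -> f32 + Sync,
        {
            for x in data.iter_mut() {
                *x = f(*x);
            }
        }
    }

    /// The algorithm never mentions a kernel; a hypothetical GPU executor
    /// would lower the same call into a device dispatch instead.
    pub fn scale(exec: &impl Executor, data: &mut [f32], factor: f32) {
        exec.map_in_place(data, |x| x * factor);
    }

    fn main() {
        let mut v = vec![1.0_f32, 2.0, 3.0];
        scale(&CpuSeq, &mut v, 2.0);
        assert_eq!(v, [2.0, 4.0, 6.0]);
    }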
| ▲ | bobajeff 6 days ago | parent | next [-] | | > I would rather that code be in a library in the language of your choice that you can inspect and understand than hidden somewhere in a driver behind 3 layers of abstraction.

I agree that that would be ideal. Hopefully that can happen one day with C++, Rust, and other languages. So far Mojo seems to be the only language close to that vision. |
| ▲ | pjmlp 6 days ago | parent | prev [-] | | Guess which companies have been driving senders / receivers work. |
| |
| ▲ | trogdc 6 days ago | parent | prev [-] | | These are just wrappers around intrinsics that exist in LLVM already. |
| |
| ▲ | mertcikla 5 days ago | parent | prev | next [-] | | The LLVM people have been at it for a while now; they got it working on Nvidia and AMD, and are working on Apple, I believe: https://www.modular.com/ It baffles me that more people haven't heard about them. It's mighty impressive what they have achieved. |
| ▲ | 6 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | Voultapher 6 days ago | parent | prev | next [-] | | Cool, looking forward to that. It's certainly a good fit for the Rust story overall, given the increasingly heterogeneous nature of systems. |
| ▲ | junon 6 days ago | parent | prev [-] | | I'm surprised there isn't already a Rust GPU WG. That'd be incredible. |
|
|
|
| ▲ | markman 5 days ago | parent | prev | next [-] |
| I wish I could say that my lack of understanding of the contents of this article was just ignorance, but unfortunately it makes my brain want to explode. There is a backhanded compliment in there somewhere. What I mean is, you're a smart mofo. |
|
| ▲ | shmerl 6 days ago | parent | prev [-] |
| Do you get any interest from big players like AMD? I'm surprised they didn't start such an initiative themselves, but I guess they can just as well back yours. |
| |