jpc0 6 days ago

Here's an idea:

Get Nvidia, AMD, Intel, and whoever else you can into a room. Get the LLVM folks into the same room.

Compile LLVM IR directly into hardware instructions fed to the GPU; get the vendors to open up.

Having to target an API is part of the problem. Get them to let you write Rust that compiles directly into the code that will run on the GPU, not something that becomes something else, that becomes SPIR-V, that controls a driver that will eventually run it on the GPU.

Ygg2 6 days ago | parent | next [-]

Hell will freeze over, then drop to negative-Kelvin temperatures, before you see Nvidia agreeing in earnest to do so. They make too much money on NOT GETTING COMMODITIZED. Nvidia even changed CUDA's license terms to block translation layers.

It's the same reason Safari is in such a sorry state. Why make the web browser better when it could cannibalize your App Store?

ashdksnndck 6 days ago | parent | next [-]

Hmm. Maybe the opportunity would be more like AMD, Intel, and the various AI labs and big tech get together, and by their powers combined figure out a way to stop giving NVIDIA their margin?

__s 5 days ago | parent [-]

They tried. OpenCL, OpenMP

jpc0 6 days ago | parent | prev | next [-]

Somehow I want to believe that if you get everyone else in the room, and it becomes enough of a market force that Nvidia stops selling GPUs because of it, they will change. Cough, Linux GPU drivers.

pjmlp 6 days ago | parent | prev | next [-]

By making Web browser "better" do you mean more ChromeOS like?

CUDA is great for Python as well.

Maybe Intel and AMD should actually produce something worth using.

Ygg2 5 days ago | parent | next [-]

> By making Web browser "better" do you mean more ChromeOS like?

Whichever part makes Safari completely fail at properly rendering Jira. A task even Firefox can do.

ninkendo 5 days ago | parent [-]

> Whichever part makes Safari completely fail at properly rendering Jira

What evidence do you have that this is Safari’s fault and not Jira’s fault?

Give me a web browser and I will write code that will fail in it and work in other browsers.

pawelmurias 5 days ago | parent | prev [-]

Better for running web apps.

pjmlp 5 days ago | parent [-]

As long as they are using Web standards, and not Chrome APIs, I do agree.

shmerl 6 days ago | parent | prev [-]

Yeah, Nvidia can get lost with their CUDA moat. But AMD should be interested.

bobajeff 6 days ago | parent | prev | next [-]

Sounds sort of like the idea behind MLIR and its GPU dialects.

* https://mlir.llvm.org/docs/Dialects/NVGPU/

* https://mlir.llvm.org/docs/Dialects/AMDGPU/

* https://mlir.llvm.org/docs/Dialects/XeGPU/

jpc0 6 days ago | parent | next [-]

Very likely something along those lines.

Effectively, standardise passing operations off to a coprocessor. C++ is moving in that direction with std::execution (stdexec), the linear algebra library (std::linalg), and SIMD.

I don’t see why Rust wouldn’t also do that.

Effectively: why must I write a GPU kernel to have an algorithm execute on the GPU? We're talking about memory wrangling and linear algebra almost all of the time when dealing with a GPU in any way whatsoever. I don't see why we need a different interface and API layer for that.
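To make that concrete, here is a minimal sketch of the "algorithm in the language, not a kernel" idea: saxpy (y = a*x + y) written as ordinary Rust iterator code. The function name and the code are my illustration, not anything shipping today; as written it runs on the CPU, and the thread's point is that a toolchain could lower this same code to GPU instructions (roughly the way NVIDIA's nvc++ already offloads C++ standard parallel algorithms with -stdpar), with no separate kernel language or driver API in between.

```rust
// saxpy (y = a*x + y) as plain Rust library code you can inspect,
// rather than a kernel written against a separate GPU API.
fn saxpy(a: f32, x: &[f32], y: &mut [f32]) {
    // Element-wise: each y[i] becomes a * x[i] + y[i].
    y.iter_mut().zip(x).for_each(|(yi, &xi)| *yi += a * xi);
}

fn main() {
    let x = vec![1.0_f32, 2.0, 3.0];
    let mut y = vec![10.0_f32, 20.0, 30.0];
    saxpy(2.0, &x, &mut y);
    println!("{:?}", y); // [12.0, 24.0, 36.0]
}
```

The iterator form carries the same information a kernel would (a data-parallel map over independent elements), which is exactly what a compiler would need to target a GPU instead.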

OpenGL et al abstract some of the linear algebra away from you which is nice until you need to give a damn about the assumptions they made that are no longer valid. I would rather that code be in a library in the language of your choice that you can inspect and understand than hidden somewhere in a driver behind 3 layers of abstraction.

bobajeff 6 days ago | parent | next [-]

>I would rather that code be in a library in the language of your choice that you can inspect and understand than hidden somewhere in a driver behind 3 layers of abstraction.

I agree that would be ideal. Hopefully it can happen one day with C++, Rust, and other languages. So far Mojo seems to be the only language close to that vision.

pjmlp 6 days ago | parent | prev [-]

Guess which companies have been driving senders / receivers work.

trogdc 6 days ago | parent | prev [-]

These are just wrappers around intrinsics that already exist in LLVM.

mertcikla 5 days ago | parent | prev | next [-]

The LLVM people have been at it for a while now; they have it working on Nvidia and AMD, and are working on Apple, I believe: https://www.modular.com/

It baffles me that more people haven't heard of them. It's mighty impressive what they have achieved.
