doctorpangloss 3 days ago

Hmm, but supposing the accelerated NVIDIA-specific inference data types were available in Triton, you would just use those, right? Why not contribute to Triton? They accept PRs. So what if contributing amounts to free product-ecosystem development for NVIDIA and other giant corporations?

qeternity 3 days ago | parent

Second line of the post:

> The main objective is to learn writing attention in CUDA C++, since many features are not available in Triton, such as MXFP8 / NVFP4 MMA for sm120.

doctorpangloss 3 days ago | parent

Yes… I read it. If the feature is missing, why not contribute it instead?

almostgotcaught 3 days ago | parent

How many PRs do you have landed in Triton that you can just blithely say "contribute it"?

saagarjha 3 days ago | parent

I mean, you can look at the most recent commit and see that the infrastructure for this is being built out right now (though of course OpenAI doesn't care about sm_120).

almostgotcaught 3 days ago | parent

I don't know what this comment has to do with my point that OAI doesn't take commits from randoms, especially for infra code.

doctorpangloss 3 days ago | parent | next

By all means, the guy could have written the Triton fixes he needs and NOT sent them upstream. It would still make more sense to do that! He's obviously an expert, and I was sincerely wondering: why bother with the C++ stuff if he already knew the better way, and also has the chops to implement it?

almostgotcaught 3 days ago | parent

There's an enormous difference between writing kernels and writing compiler infra.

saagarjha 3 days ago | parent | prev

Yeah they do