▲ subharmonicon | a day ago

The blog post is about using an NVIDIA-specific tensor core API that they have built to get good performance. Modular has been pushing the notion that they are building technology that allows writing hardware-vendor-neutral solutions so that users can break free of NVIDIA's hold on high-performance kernels. From their own writing:

> We want a unified, programmable system (one small binary!) that can scale across architectures from multiple vendors—while providing industry-leading performance on the most widely used GPUs (and CPUs).
▲ totalperspectiv | a day ago

They let you write a kernel for NVIDIA, or AMD, that takes full advantage of either vendor's hardware, then throw a compile-time if-statement in there to switch which kernel to use based on the hardware available. So you can support either vendor with performance as good as the vendor libraries'. That's not lock-in, to me at least.

It's not as good as the compiler just magically producing optimized kernels for arbitrary hardware, fully agree there. But it's a big step forward from CUDA/HIP.
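A minimal sketch of the compile-time dispatch described above. The vendor-detection helpers (`has_nvidia_gpu_accelerator`, `has_amd_gpu_accelerator`) exist in recent Mojo standard libraries under `sys`, but exact names and locations may differ by version, and the kernel bodies here are placeholders rather than real tensor-core code.

```mojo
# Sketch only: vendor-detection helper names are assumptions and the
# kernel bodies are placeholders, not real tensor-core kernels.
from sys import has_nvidia_gpu_accelerator, has_amd_gpu_accelerator


fn matmul_nvidia_tuned():
    # Stand-in for a kernel written against NVIDIA tensor-core intrinsics.
    print("dispatching NVIDIA-tuned kernel")


fn matmul_amd_tuned():
    # Stand-in for a kernel written against AMD matrix-core intrinsics.
    print("dispatching AMD-tuned kernel")


fn matmul_portable():
    # Fallback using only portable constructs.
    print("dispatching portable kernel")


fn matmul():
    # `@parameter if` is evaluated at compile time, so only the branch for
    # the target hardware is code-generated into the binary.
    @parameter
    if has_nvidia_gpu_accelerator():
        matmul_nvidia_tuned()
    elif has_amd_gpu_accelerator():
        matmul_amd_tuned()
    else:
        matmul_portable()


fn main():
    matmul()
```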
| |
▲ smilekzs | 2 days ago

Not OP, but I think this could be an instance of leaky abstraction at work. Most of the time you hand-write an accelerator kernel hoping to optimize for runtime performance. If the abstraction/compiler does not fully insulate you from micro-architectural details that affect performance in non-trivial ways (e.g. the memory bank conflicts mentioned in the article), then you still end up with per-vendor implementations, or compile-time if-else blocks all over the place. This is less than ideal, but still arguably better than working with separate vendor APIs or, worse, completely separate toolchains.
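To make those "compile-time if-else blocks" concrete: a leaked micro-architectural detail often surfaces as a small per-vendor tuning constant rather than a whole separate kernel. The sketch below is illustrative only; the padding values are made up, and the `has_nvidia_gpu_accelerator` helper is assumed to be available from `sys` as in recent Mojo releases.

```mojo
# Illustrative sketch: padding values are invented, not tuned numbers.
from sys import has_nvidia_gpu_accelerator

alias TILE = 32


fn smem_pad() -> Int:
    # A per-vendor tuning knob resolved at compile time. Details like
    # shared-memory bank width leak through the abstraction, so the
    # "right" value can differ between vendors and GPU generations.
    @parameter
    if has_nvidia_gpu_accelerator():
        return 1  # hypothetical value
    else:
        return 2  # hypothetical value


fn tile_row_stride() -> Int:
    # Padding each row of a TILE x TILE shared-memory tile shifts
    # consecutive rows onto different banks, reducing conflicts on
    # strided (column-wise) accesses.
    return TILE + smem_pad()


fn main():
    print("shared-memory row stride:", tile_row_stride())
```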
▲ whimsicalism | a day ago

Yes, it looks like they have some sort of metaprogramming setup (nicer than C++) for doing this: https://www.modular.com/mojo
▲ totalperspectiv | a day ago

I can confirm, it's quite nice.
▲ whimsicalism | 21 hours ago

Just wondering: why do you use Mojo here over Triton or the new Pythonic CuTe/CUTLASS?
▲ totalperspectiv | an hour ago

Because I was originally writing some very CPU-intensive SIMD stuff, which Mojo is also fantastic for. Once I got that working and running nicely, I decided to try getting the same algorithm running on GPU since, at the time, they had just open-sourced the GPU parts of the stdlib. It was really easy to get going with. I have not used Triton/CuTe/CUTLASS though, so I can't compare against anything other than CUDA, really.