downrightmike 20 hours ago

CUDA is 20 years old and it shows. Time for a new language that fixes those 20 years of rough edges. The guy who made LLVM (Lattner) is working on this: https://www.modular.com/mojo

Good podcast on him: https://newsletter.pragmaticengineer.com/p/from-swift-to-moj...

bigyabai 19 hours ago | parent | next [-]

What I gather from this comment is that you haven't written CUDA code in a while, maybe ever.

Mojo looked promising initially. The more details we got, though, the more it became apparent that they weren't interested in actually competing with Nvidia. Mojo doesn't replace the majority of what CUDA does, and it has no translation layer or interoperability with existing CUDA programs. It uses a proprietary compiler with a single implementation. They're not working with any serious standardization orgs, they rely on C/C++ FFI for huge amounts of code, and as far as I'm aware there's nothing like the SemVer-style compute-capability versioning CUDA offers. The more popular Mojo gets, the more entrenched Nvidia (and likely CUDA) becomes. We need something more like OpenGL, with mutual commitment from OEMs.
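
For concreteness, here's roughly what I mean by versioned compute capability. This is a minimal, illustrative sketch; the saxpy kernel and the sm_80 target are just stand-ins I picked, not anything Modular publishes for Mojo:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative only: a trivial kernel used to show compute-capability targeting.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x); cudaFree(y);
        return 0;
    }

    // The point: you can pin the build to a named compute capability (sm_80 here)
    // and also embed PTX so the driver can JIT the same binary on newer GPUs:
    //   nvcc -gencode arch=compute_80,code=sm_80 \
    //        -gencode arch=compute_80,code=compute_80 saxpy.cu -o saxpy

That forward-compatibility contract across hardware generations is the kind of guarantee I don't see Mojo offering.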

Lattner is an awesome dude, but Mojo is such a trend-chasing clusterfuck that I don't know what anyone sees in it. I'm worried that Apple's "fuck the dev experience" attitude rubbed off on Chris in the long run, and made him callous towards appeals to openness and industry-wide consortiums.

CalmDream 16 hours ago | parent [-]

Most of the stuff you pointed out is addressed in a series of blog posts by Lattner: https://www.modular.com/democratizing-ai-compute

bigyabai 15 hours ago | parent [-]

Many of those posts are opinionated and some are even provably wrong. The very first one, about DeepSeek's "recent breakthrough," rests on claims that were never proven or replicated in practice. He's drawing premature conclusions, ones that look especially silly now that we know DeepSeek evaded US sanctions to import Nvidia Blackwell chips.

I can't claim to know more about GPU compilers than Lattner, but in this specific instance I think Mojo fucked itself and is at the mercy of hardware vendors that don't care about it. CUDA, by comparison, has no expense spared on its development at every layer of the stack. Mojo doesn't compare; the project is doomed if it intends any real competition with CUDA.

CalmDream 14 hours ago | parent [-]

What is provably wrong?

htrp 20 hours ago | parent | prev [-]

Mojo has been in the works for 3+ years now... not sure the language survives beyond the VC funding Modular has.