| ▲ | xmorse 13 hours ago |
| Writing this in Mojo would have been so much easier |
|
| ▲ | 3abiton 13 hours ago | parent | next [-] |
| It's barely gaining adoption though. The lack of buzz is a chicken-and-egg issue for Mojo. I fiddled with it briefly (mainly to get some of my Python scripts working), and it was surprisingly easy. It'll shoot up one day for sure if Lattner doesn't give up on it early. |
| |
| ▲ | ronsor 11 hours ago | parent | next [-] | | Isn't the compiler still closed source? I and many other ML devs have no interest in a closed-source compiler. We have enough proprietary things from NVIDIA. | | |
| ▲ | 0x696C6961 8 hours ago | parent | next [-] | | Yeah, the mojo pitch is so good, but I don't think anyone has an appetite for the potential fuckery that comes with a closed source platform. | |
| ▲ | 3abiton 8 hours ago | parent | prev [-] | | Yes, but Lattner has said multiple times it's closed until it matures (he apparently did this with LLVM and Swift too). So not unusual. His open source target is end of 2026. In all fairness, I have zero doubt that he'll deliver. | | |
| ▲ | pjmlp 2 hours ago | parent [-] | | Given Swift for Tensorflow, let's see how this one goes. | | |
| ▲ | jacobgorm 19 minutes ago | parent | next [-] | | Why would anyone want to pair a subpar language with a subpar ML framework? | | |
| ▲ | pjmlp 6 minutes ago | parent [-] | | That is the thing: what lessons were learnt from it, and how will Mojo tackle them? |
| |
| ▲ | saagarjha an hour ago | parent | prev [-] | | That one did get open sourced but nobody ended up wanting to use it |
|
|
| |
| ▲ | boredatoms 9 hours ago | parent | prev [-] | | I feel like it's in AMD/Intel/G's interest to pile a load of effort into (an open source) Mojo |
|
|
| ▲ | ipsum2 7 hours ago | parent | prev | next [-] |
| Mojo is not open source and would not get close to the performance of cuTile. I'm tired of people shilling things they don't understand. |
| |
| ▲ | almostgotcaught 7 hours ago | parent [-] | | it's all over this thread (and every single other hn thread about GPU/ML compilers) - people quoting random buzzword/clickbait takes. |
|
|
| ▲ | llmslave2 11 hours ago | parent | prev | next [-] |
| I really want Mojo to take off. Maybe in a few years. The lack of a stdlib holds it back more than they think, and since their focus is narrow atm it's not useful for the vast majority of work. |
|
| ▲ | bigyabai 13 hours ago | parent | prev | next [-] |
| Use cases like this are why Mojo isn't used in production, ever. What does Nvidia gain from switching to a proprietary frontend for a compiler backend they're already using? It's a legal headache. Second-rate libraries like OpenCL had industry buy-in because they were open. They went through standards committees and cooperated with the rest of the industry (even Nvidia) to hear out everyone's needs. Lattner gave up on appealing to that crowd the moment he told Khronos to pound sand. Nobody should be wondering why Apple or Nvidia won't touch Mojo with a thirty-nine and a half foot pole. |
| |
| ▲ | xmorse 11 hours ago | parent | next [-] | | Kernels now written in Mojo were all hand-written in MLIR, like in this repo. They made a full language because that's not scalable; a sane language is totally worth it. Nvidia will probably end up buying them in a few years. | | |
| ▲ | pjmlp 2 hours ago | parent | next [-] | | NVidia is perfectly fine with C++ and Python JIT. CUDA Tile was designed exactly to give Python parity in writing CUDA kernels, acknowledging the relevance of Python, while offering a path where researchers don't need to mess with C++. It was announced at this year's GTC. NVidia has no reason to use Mojo. | |
| ▲ | bigyabai 10 hours ago | parent | prev | next [-] | | I don't think Nvidia would acquire Mojo when the Triton compiler is open source, optimized for Nvidia hardware, and considered an industry standard. | |
| ▲ | saagarjha 8 hours ago | parent | prev [-] | | Nobody is writing MLIR by hand, what are you on about? There are so many MLIR frontends |
| |
| ▲ | oedemis 8 hours ago | parent | prev | next [-] | | How would Mojo with MAX optimize the process? | |
| ▲ | itsthecourier 11 hours ago | parent | prev [-] | | what about a forty-foot pole? would it be viable? |
|
|
| ▲ | pjmlp 12 hours ago | parent | prev [-] |
| It would help if they were not so macOS- and Linux-focused. Julia and Python GPU JITs work great on Windows, and many people only get Windows systems by default at work. |
| |
| ▲ | saagarjha 8 hours ago | parent | next [-] | | Approximately nobody writing high performance code for AI training is using Windows. Why should they target it? | | |
| ▲ | pjmlp 3 hours ago | parent [-] | | As desktop, and sometimes that is the only thing available. When is the Year of NPUs on Linux? | | |
| ▲ | saagarjha an hour ago | parent [-] | | This targets Blackwell GPUs so I’m not sure what you are talking about | | |
|
| |
| ▲ | bigyabai 9 hours ago | parent | prev [-] | | I've commissioned a board of MENSA members to devise a workaround for this issue; they've identified two potential solutions. 1) Install Linux 2) Summon Chris Lattner to play you a sad song on the world's smallest violin in honor of the Windows devs that refuse to install WSL. | | |
| ▲ | pjmlp 3 hours ago | parent [-] | | I'd go with: customers keep using CUDA with Python and Julia, ignore that Chris Lattner's company exists, while Mojo repeats Swift for Tensorflow's history. What about that outcome? |
|
|