| ▲ | notfried 4 days ago |
| If CUDA isn't that strong of a moat/tie-in and Chinese tech companies can seemingly reasonably migrate to these chips, why hasn't AMD been able to compete more aggressively with nVidia on a US/global scale when they had a much longer head start? |
|
| ▲ | brookst 4 days ago | parent | next [-] |
| 1. AMD isn’t different enough. They’d be subject to the same export restrictions and political instability as Nvidia, so why would global companies switch to them? 2. CUDA has been a huge moat, but the incentives are incredibly strong for everybody except Nvidia to change that. The fact that it was an insurmountable moat five years ago in a $5B market does not mean it’s equally powerful in a $300B market. 3. AMD’s culture and core competencies are really not aligned to playing disruptor here. Nvidia is generally more agile and more experimental. It would have taken a serious pivot years ago for AMD to be the right company to compete. |
| |
| ▲ | FuriouslyAdrift 4 days ago | parent | next [-] | | AMD is HIGHLY successful in the GPU compute market. Their Instinct line actually outperforms most nVidia chips for less money. It's the CUDA software ecosystem they have not been able to overcome. AMD has had multiple ecosystem stalls, but ROCm, which is open source and multi-vendor, finally appears to be taking off. AMD is also unifying its GPU architectures for the next generation (like nVidia) so that gaming-card sales and the like can subsidize development (like nVidia). | | |
| ▲ | immibis 4 days ago | parent [-] | | Why doesn't AMD just write a CUDA translation layer? Yeah, it's a bit glib to say "just", but they're a pretty big company. It's not like one guy doing it in a basement. Does Nvidia have patents on CUDA? They're probably invalid in China, which would explain why China can do this and AMD can't. | | |
| |
| ▲ | bjornsing 4 days ago | parent | prev | next [-] | | > CUDA has been a huge moat The CUDA moat is extremely exaggerated for deep learning, especially for inference. It’s simply not hard to do matrix multiplication and a few activation functions here and there. | | |
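(To make the claim above concrete: a toy inference pass really is just matrix multiplies plus elementwise activations. A minimal NumPy sketch, with made-up layer sizes purely for illustration:)

```python
import numpy as np

def relu(x):
    # elementwise activation: the "few activation functions here and there"
    return np.maximum(x, 0.0)

def mlp_forward(x, weights, biases):
    """Toy inference pass: alternating matmuls and activations."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)           # hidden layers
    return x @ weights[-1] + biases[-1]  # final linear layer, no activation

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
biases = [np.zeros(8), np.zeros(2)]
out = mlp_forward(rng.standard_normal((1, 4)), weights, biases)
print(out.shape)  # (1, 2)
```

(Real deep-learning stacks add fused kernels, attention, quantization, and so on, but the core compute any accelerator must support is of this shape.)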
| ▲ | OkayPhysicist 4 days ago | parent | next [-] | | It regularly shocks me that AMD doesn't release their cards with at least enough CUDA reimplementation to run DL models. As you point out, AI applications use a tiny subset of the overall API, the courts have ruled that APIs can't be protected by copyright, and CUDA is NVIDIA's largest advantage. It seems like an easy win, so I assume there's some good reason. | | |
| ▲ | nerdsniper 4 days ago | parent | next [-] | | A very cynical take: AMD's and Nvidia's CEOs are cousins, and there's more money to be made with one dominant monopoly than two competitive companies. And this income could be an existential difference-maker for Taiwan. | | |
| ▲ | kawaiikouhai 4 days ago | parent [-] | | bro, both are American CEOs. What is this racialized nonsense? Have you seen Jensen Huang speak Mandarin? His Mandarin is actually awful for someone who left Taiwan at 8. |
| |
| ▲ | tux1968 4 days ago | parent | prev | next [-] | | AMD can't even figure out how to release decent drivers for Linux in a timely fashion. It might not be the largest market, but it would at least have given them a competitive advantage in reaching some developers. Either there is something very incompetent in their software team, or there are business reasons intentionally restraining them. | |
| ▲ | wmf 4 days ago | parent | prev [-] | | They did; it's called HIP. |
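(For context: HIP's runtime API is mostly a one-for-one rename of CUDA's, and AMD ships hipify tools that translate CUDA source. A toy, string-based sketch of that idea follows; the rename table is a real subset of the HIP mappings, but the real hipify-clang does a proper AST-based translation, not this:)

```python
# Illustrative subset of the CUDA-to-HIP renames performed by AMD's hipify tools.
RENAMES = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}

def toy_hipify(source: str) -> str:
    """Translate a CUDA source snippet to HIP by naive token renaming."""
    for cuda_name, hip_name in RENAMES.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_src = "#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, 1024); cudaFree(d);"
print(toy_hipify(cuda_src))
```

(Because the APIs mirror each other so closely, much CUDA code ports with little more than this kind of renaming; the hard part, as the thread notes, is the surrounding ecosystem, not the API surface.)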
| |
| ▲ | axoltl 4 days ago | parent | prev | next [-] | | From what I've been reading, the inference workload tends to ebb and flow throughout the day, with much lower loads overnight than at, for example, 10 AM PT / 1 PM ET. I understand companies fill that gap with training (because an idle GPU costs the most). So for data centers, training is just as important as inference. | |
| ▲ | bjornsing 3 days ago | parent [-] | | > So for data centers, training is just as important as inference. Sure, and I’m not saying buying Nvidia is a bad bet. It’s the most flexible and mature hardware out there, and the huge installed base also means you know future innovations will align with this hardware. But it’s not primarily a CUDA thing or even a software thing. The Nvidia moat is much broader than just CUDA. |
| |
| ▲ | sciencesama 4 days ago | parent | prev [-] | | The drivers are the most annoying issue! PyTorch likes CUDA so much that it just works; anything with ROCm just sucks! |
| |
| ▲ | danesparza 4 days ago | parent | prev [-] | | And it would be a big bet for AMD. They don't create and manufacture chips 'just in time' -- it takes man-hours and MONEY to reserve fab capacity and spin up production, not to mention marketing dollars. | | |
|
|
| ▲ | belval 4 days ago | parent | prev | next [-] |
| > If CUDA isn't that strong of a moat/tie-in and Chinese tech companies can seemingly reasonably migrate to these chips, why hasn't AMD been able to compete more aggressively with nVidia on a US/global scale when they had a much longer head start? It's all about investment. If you are a random company, you don't want to sink millions into figuring out how to use AMD, so you apply the tried and true "no one gets fired for buying Nvidia". If you are an authoritarian state with some level of control over domestic companies, that calculus does not exist. You can just ban Nvidia chips and force companies to learn how to use the new thing. By using the new thing, an ecosystem gets built around it. It's the beauty of centralized control in the face of free markets, and I don't doubt that it will pay off for them. |
| |
| ▲ | PunchyHamster 4 days ago | parent | next [-] | | I think they'd be entirely fine just using NVIDIA, and most of the push came from the US itself trying to ban exports (or "exports", as NVIDIA cards are put together in Chinese factories...). Also, AMD really didn't invest enough in making their software experience as nice as NVIDIA's. | | | |
| ▲ | ithkuil 4 days ago | parent | prev [-] | | Are there precedents where an authoritarian state outperformed the free market in technological innovation? Or would China be different because it's a mix of market and centralized rule? |
|
|
| ▲ | eunos 4 days ago | parent | prev | next [-] |
| Because the CUDA moat in China was wrecked artificially for political reasons rather than technical ones |
| |
|
| ▲ | buyucu 4 days ago | parent | prev | next [-] |
| I use AMD MI300s at work, and my experience is that for PyTorch at least there is no moat. The moat only exists in people's minds. Until 2022 or so, AMD was not really investing in its software stack. Once it did, it caught up with Nvidia. |
| |
| ▲ | imtringued 4 days ago | parent [-] | | The only way the average person can access a MI300 is through the AMD developer cloud trial which gives you a mere 25 hours to test your software. Meanwhile NVidia hands out entire GPUs for free to research labs. If AMD really wanted to play in the same league as NVidia, they should have built their own cloud service and offered a full stack experience akin to Google with their TPUs, then they would be justified in ignoring the consumer market, but alas, most people run their software on their local hardware first. | | |
| ▲ | overfeed 4 days ago | parent | next [-] | | > The only way the average person can access a MI300 is through the AMD developer cloud trial which gives you a mere 25 hours to test your software HN has a blind spot where AMD's absence in the prosumer/SME space is interpreted as failing horribly. Yet AMD's Instinct cards are selling very well at the top end of the market. If you were trying to disrupt a dominant player, would you try selling a million gadgets to a million people, or a million gadgets to 3-10 large organizations? | |
| ▲ | FuriouslyAdrift 4 days ago | parent | prev | next [-] | | AMD sells 100% of the chips they can produce, and at a premium. It's a chicken-and-egg problem here. They have to compete with nVidia for pre-buying fab capacity at TSMC, and they are getting outbid. | | |
| ▲ | tonyhart7 4 days ago | parent [-] | | AMD also needs to share that fab wafer capacity with its processor division and third-party clients (Sony, Valve, various HPC customers) |
| |
| ▲ | Cheer2171 4 days ago | parent | prev [-] | | I can rent an MI300X for $2.69/hr right now on runpod. |
|
|
|
| ▲ | chii 4 days ago | parent | prev | next [-] |
| AMD presumably doesn't have Chinese state backing, where profit is less of a concern and a company can operate unprofitably for many years (decades even) as long as the end outcome is dominance. |
|
| ▲ | shrubble 4 days ago | parent | prev | next [-] |
| Sadly, AMD and its precursor graphics company, ATI, have had garbage driver software since literally the mid-1990s. They have never had a focus on top-notch software development. |
| |
|
| ▲ | baq 4 days ago | parent | prev | next [-] |
| CUDA isn't a moat... in China. The culture is much more NIH there. |
|
| ▲ | sampton 4 days ago | parent | prev | next [-] |
| Because the Chinese government can tell Chinese companies to adopt Chinese tech, and they will do it. Short-term pain for long-term gain. |
|
| ▲ | 2OEH8eoCRo0 4 days ago | parent | prev | next [-] |
| It's interesting that CUDA is a moat, because if AI really were as good as they claim, wouldn't the CUDA moat evaporate? |
| |
| ▲ | random3 4 days ago | parent [-] | | Exactly. The whole argument that software is a moat is at best a temporary illusion. The supply chain is the moat; software is not. |
|
|
| ▲ | dworks 4 days ago | parent | prev | next [-] |
| Most chipmakers in China are making or have made their new generation of products CUDA-compatible. |
|
| ▲ | belter 4 days ago | parent | prev | next [-] |
| Do you know how bad AMD is at writing drivers and software in general? |
|
| ▲ | FrustratedMonky 4 days ago | parent | prev | next [-] |
| People are trying to break the moat. See, Mojo, a new language to compile to other chips.
https://www.modular.com/mojo |
| |
| ▲ | PunchyHamster 4 days ago | parent [-] | | I don't think "learn an entirely new language" is all that appealing vs "just buy NVIDIA cards" | | |
| ▲ | FrustratedMonky 4 days ago | parent [-] | | This was in terms of breaking the Nvidia monopoly. Mojo is a variant of Python. Compared with the difficulty of migrating off CUDA, learning Python is a pretty small barrier. Sure, you can keep buying Nvidia, but that wasn't what was discussed. | | |
| ▲ | almostgotcaught 4 days ago | parent [-] | | > Mojo is a variant of python. Lol this is how I know no one that pushes mojo on hn has actually ever used mojo. | | |
| ▲ | FrustratedMonky 4 days ago | parent [-] | | Yes, I'm oversimplifying the concept. What is wrong with that? If I posted a thesis on compilers, would that really help clarify the subject? Read the link for details. Is Mojo attempting to offer a non-CUDA solution? Yes. Is it using Python as the language? Yes. Are there some complicated details there? Yes. Congratulations. | | |
| ▲ | almostgotcaught 4 days ago | parent [-] | | > Yes. Is it using Python as the language? You're completely wrong here. That's the "what's wrong with it". | | |
| ▲ | FrustratedMonky 3 days ago | parent [-] | | I think you are missing the nuance between the different aspects of using the Python interpreter, integrating new functions with Python, and compiling to a different target. Would you say IronPython is not Python, and quibble about it? Is there some Python purist movement I'm not aware of? Should every fork of Python be forced to take Python out of its name? To say Mojo doesn't use Python, when clearly that is a huge aim of the project, makes me think you are splitting hairs on some specific point that is not clear from your one-liners. Key aspects of Mojo in relation to Python: • Pythonic syntax and ecosystem integration: Mojo adopts Python's syntax, making it familiar to Python developers, and it integrates with the existing Python ecosystem, allowing access to popular AI and machine learning libraries. • Performance focus: unlike interpreted Python, Mojo is a compiled language designed for high-performance execution on CPUs, GPUs, and other AI ASICs; it leverages MLIR (Multi-Level Intermediate Representation) for this. • Systems programming features: Mojo adds features common in systems languages, such as static typing, advanced memory safety (including a Rust-style ownership model), and the ability to write low-level code for hardware. • Compatibility and interoperability: you can call Python functions from Mojo code, although it requires a specific mechanism (e.g., within try-except blocks) due to differences in compilation and execution. • Development status: Mojo is a relatively new language still under active development; it is not yet considered production-ready for all use cases. | | |
| ▲ | almostgotcaught 3 days ago | parent [-] | | > I think you are missing the nuance between the different aspects of using the Python Interpreter, and integrating new functions with Python What if I told you I used to work at modular? What would you say then to this accusation that I'm "missing the nuance"? The rest of this is AI crap. | | |
| ▲ | FrustratedMonky 3 days ago | parent [-] | | I think then I'd have to go back to your original reply and ask what your point was. What is it you are finding objectionable? These one-liner "doh, you're wrong" replies aren't clarifying. Do you really think Mojo is not based on Python? Or that they are not trying to bypass CUDA? What is the problem? The rest might be marketing slop. But I'm not catching what your objection is. | | |
| ▲ | almostgotcaught 3 days ago | parent [-] | | > Do you really think Mojo is not based on Python? what do you mean "do you really". it's not. full stop. what part of this don't you understand? | | |
| ▲ | FrustratedMonky 2 days ago | parent [-] | | ? Are we talking about the same thing? Mojo, the new language for programming GPUs without CUDA? The marketing and website materials clearly show how they are using the Python interpreter and extending Python. They promote the use of Python everywhere, like it is one of the most hyped points. I think you are trying to quibble with whether the new functions get compiled differently than the rest of Python. So technically, when the Mojo functions are in use, that is not Python at that point? Or maybe you are saying that they have extended Python so much you would like to not call it Python anymore? Like IronPython: maybe since that gets compiled to .NET, you disagree with it being called Python? Or, to use the IronPython example, if I'm calling a .NET function inside Python, you would like to make the fine distinction that that is NOT Python at that point? It should really be called .NET? Here is a link to the docs. You worked there, so maybe there is some hair-splitting here that is not clear. https://docs.modular.com/mojo/manual/python/ Maybe it is just marketing hype that you disagree with. But right on the main page it says "Mojo is Python++". | | |
| ▲ | almostgotcaught 2 days ago | parent [-] | | > The marketing and web site materials clearly show how they are using the Python interpreter and extending Python. brother you have literally not a single clue what you're talking about. i invite you to go ask someone that currently works there about whether they're "using the Python interpreter and extending Python". | | |
| ▲ | FrustratedMonky a day ago | parent [-] | | From the docs, https://docs.modular.com/mojo/manual/python/ "This is 100% compatible because we use the CPython runtime without modification for full compatibility with existing Python libraries." At this point you need to either explain your objection or just admit you are a troll. You haven't at any point in this exchange offered an actual argument beyond 'duh, you're wrong'. I'd be OK if you pointed to something specific, like 'well, technically, the Mojo parts are compiled differently', or something. You say you worked there, but you're not even looking at their website. Here is creator Chris Lattner discussing why they used Python: https://www.youtube.com/watch?v=JRcXUuQYR90 Start at minute 12: "Mojo is a very extended version of Python". | | |
| ▲ | almostgotcaught a day ago | parent [-] | | Are you even a programmer? Do you know what any of these words mean? > using the CPython interpreter as a dynamic library (shown as libpython.dylib in figure 1). They're embedding the Python interpreter, not extending it - just like everyone and their mother has been able to do for decades: https://docs.python.org/3/extending/embedding.html I repeat: you have no idea what you're talking about, so in reality you're the troll. | | |
| ▲ | FrustratedMonky a day ago | parent [-] | | You're really splitting some very thin pedantic hairs. Your problem isn't with me; you are quibbling with their own marketing materials. Go complain to marketing if they are using words you disagree with. Everything I've posted is directly from Mojo's website. You: "Well, technically they are embedding the interpreter, so all the surrounding code that looks exactly like Python, and that we promote as being compatible with Python, and promote as extending Python: my good sir, it is not really Python. That is just a misunderstanding with marketing. Please ignore everything we are clearly making out as an important feature; it's totally wrong". They clearly promote that they are extending Python. What is your problem with that? How is that wording causing you to seize up? I'm aware of what is technically happening. Where did I ever say anything that was not directly from them? Do I really need to write a thesis to satisfy every OCD programmer who wants to argue every definition? Were you let go because of an inability to think flexibly? Maybe too many arguments with co-workers over their word choice? Does your brain tend to get single-tracked on a subject, kind of blank out in a white flash when you disagree with someone? Actually, I'm kind of convinced you're just arguing to argue. This isn't about anything. | | |
| ▲ | almostgotcaught a day ago | parent [-] | | > so all the surrounding code that looks exactly like python, and we promote as being compatible with python bro are you really thick? there is zero mojo code that is runnable python; take a look at https://github.com/modular/modular/tree/main/mojo/stdlib/std... mojo has zero to do with python. zilch, zero, nada. what they are doing is simply embedding the python interpreter and running existing python code. literally everyone already does that, ie there are a million different projects that do this same thing in order to be able to interoperate with python (did you notice the heading at the top of the page you linked is *Python interoperability* not *Python compatibility*). > This isn't about anything. it's about your complete and utter ignorance in the face of a literal first hand account (plus plenty of contrary evidence). > Were you let go because of an inability to think flexibly? let go lololol. bro if you only knew what their turnover was like you would give up this silly worship of the company. | | |
| ▲ | FrustratedMonky a day ago | parent [-] | | Sorry. I get it now. You're bitter. To be clear, I'm not a fanboy. I don't really know much about Mojo. I've watched some videos, checked out their website, and thought it was an interesting idea. The parent post was about alternatives to CUDA. I posted a six-word sentence summarizing how Mojo is trying to bypass CUDA using Python. -> And you flipped out that it isn't Python. Really? I checked out your link; it sure does look like Python. But that is the point: all of their promotional materials, every Chris Lattner video, all sales pitches, everywhere, it's Python, Python, Python. Clearly they want everyone to know how closely tied they are to Python. It is a clear goal of theirs. But I see now the pedantic hair-splitting. Mojo 'looks like Python'; they use the same syntax. "Mojo aims to be a superset of Python, meaning it largely adopts Python's syntax while introducing new features". But you say they aren't modifying or extending CPython, so this is all false and it is no longer technically Python at all. And I guess I'm saying: chill. They clearly are focused on Python all over the place; to say that it isn't is really ludicrous. You're down a rabbit hole of debating what is a name, what is a language. When is Python not Python? How different does it have to be, to not be? |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| ▲ | buckle8017 4 days ago | parent | prev [-] |
| CUDA is a legal moat. A reimplementation would run into copyright issues. No such problem in China. |