| ▲ | Mark Cuban: OpenAI Will Never Return the $1T It's Investing [video](youtube.com) |
| 34 points by operatingthetan 7 hours ago | 55 comments |
| |
|
| ▲ | aurareturn 7 hours ago | parent | next [-] |
He's right, there is a race. It's going to be a natural monopoly or duopoly because the cost to train the next SOTA model is always increasing. Realistically, I can see only 3 companies competing for the duopoly or monopoly: OpenAI, Anthropic, and Google. Everyone else has fallen behind. The flywheel of generate more revenue, get more data, get more compute, train a better model might already be too great for anyone else to overcome. I don't understand why he thinks OpenAI can't be one half of the duopoly or become the monopoly. OpenAI's models are always the first or second best overall - usually the first. They are also leading the consumer market by a wide margin. They also made a strategic decision that is paying off: committing to more compute early on, while Anthropic is hammered by the lack of compute. PS. They've raised ~$200b total, not $1 trillion. |
| |
| ▲ | preommr 7 hours ago | parent | next [-] | | > I can see that there are only 3 companies competing for the duopoly or monopoly realistically: OpenAI, Anthropic, and Google. I could see people saying this in 2022, but now? No chance. Chinese models keep demonstrating that SOTA can be approximated for a fraction of the cost. The innovation out of these companies keeps showing diminishing returns, with a greater emphasis on the tooling and application layer. Having the right workflow with the right data is more important than having the right model. We could freeze AI now, and I'd bet good money that the current state of things is good enough to be - not first - but competitive for the next few years. Even if we do end up with an oligopoly situation, it'll be less like Microsoft in the 90s and more like Microsoft now, where they give out Windows for free, support WSL, and focus on cloud services rather than their OS. | | |
| ▲ | jitler an hour ago | parent [-] | | > Chinese models keep demonstrating that SOTA can be approximated for a fraction of the cost. Wow, sounds like a threat to national security. Those Silicon Valley company shills will start donating to select campaigns in the hope of banning Chinese LLMs. Can you imagine? Your children being exposed to the propaganda that these LLMs will inevitably be tainted to spew? |
| |
| ▲ | atwrk 7 hours ago | parent | prev | next [-] | | How can this become a monopoly/duopoly? There is no moat, the Chinese providers will continue to hunt the market leader at 10% of the price, there is no network effect (OpenAI's Sora was a play in that direction and failed). I'm constantly amazed how this AGI/monopoly narrative can be kept up so long in the West, it just doesn't make sense (unless the state creates said monopoly by forbidding competition). | | |
| ▲ | aurareturn 7 hours ago | parent [-] | | There is clearly a moat - or Claude Code wouldn't be generating over $10b in ARR every single month. | | |
| ▲ | piker 6 hours ago | parent | next [-] | | That's not what "moat" means. Claude Code has a castle. A "moat" is meant to protect the castle from invaders. It would be things like high switching costs, proprietary formats, network effects, etc. that aren't there. In other comments people mention the "flywheel" of data and money feeding training, but there's a view that at some point the baseline open-weight models are "good enough" that the money will dry up. | | |
| ▲ | aurareturn 6 hours ago | parent [-] | | > baseline open-weight models are "good enough" that the money will dry up
I take a different view. Open-weight models aren't going to be free forever. At some point, open-weight model labs will also have to make money. My guess is that the industry will consolidate: the winners will absorb the losers and focus on generating revenue. Therefore, there will be a growing gap between the open, free models and the proprietary SOTA models. | | |
| ▲ | vidarh 6 hours ago | parent | next [-] | | What the open-weight labs have shown is that you can go from nothing to competing with SOTA models at a tiny fraction of the cost for the SOTA models. If there is consolidation by absorption, that derisks attempting to challenge the SOTA providers, and so they will keep facing attempts. | |
| ▲ | atwrk 2 hours ago | parent | prev | next [-] | | But all the open-weight players make money right now. Google (Gemma), Alibaba (Qwen), z.ai (GLM), minimax.io (Minimax) - they all have hosted offerings and sometimes closed-weight max versions. And the existence of the open-weight as well as the cheaper tier-2 offerings places a ceiling on the prices the SOTA companies can demand - and as far as we know, current prices don't even fully cover inference, at least not for OpenAI. | | |
| ▲ | aurareturn an hour ago | parent [-] | | Are they profitable on their LLM training? It's not clear. Z.ai is definitely not profitable. | | |
| ▲ | atwrk 37 minutes ago | parent [-] | | To my knowledge none of the players is even profitable on inference, though Google probably is, considering the continuous release of papers around KV-cache optimizations, MTP, etc. |
|
| |
| ▲ | thepasch 6 hours ago | parent | prev [-] | | > Open-weight models aren't going to be free forever. The ones that are already released are, and they're already very good for most purposes and can be fine-tuned indefinitely, including months or years down the line when processes have been optimized and things aren't as compute-heavy as they are now. |
|
| |
| ▲ | aswegs8 6 hours ago | parent | prev [-] | | That's definitely a moat. Being able to generate ARR every month. | | |
| ▲ | atwrk 3 hours ago | parent | next [-] | | No, a moat would be a feature preventing the competition from competing successfully. Classically things like patents, for example, or process knowledge like ASML currently has for EUV lithography, or the network effects of a social media platform, or access to data no one else has access to. ARR is not a moat at all, because the revenue of OpenAI is not preventing Alibaba, z.ai and so on from generating revenue as well. The opposite is true, actually, because the first mover prepared the market (e.g. user education about application possibilities, creating the willingness to pay for the service in the first place) for the second movers. People here write about switching from Claude to Codex mid-workday - that is the absolute opposite of a moat. The only companies that have a chance of not losing everything in this market are those with established non-AI revenue streams, like Google or Alibaba, or those focusing on profitability in niche markets instead of participating in the SOTA death race. | |
| ▲ | rowanG077 4 hours ago | parent | prev [-] | | No it's not. There is even a Wikipedia page for it: https://en.wikipedia.org/wiki/Economic_moat A moat is protection so you can keep your ARR up, or increase it, over the years. Arguably only Google has a moat, with their TPUs. Nvidia has a moat. But the others, who just train some models on Nvidia hardware, have no moat. |
|
|
| |
| ▲ | JumpCrisscross 3 hours ago | parent | prev | next [-] | | > can see that there are only 3 companies competing for the duopoly or monopoly realistically: OpenAI, Anthropic, and Google Amazon and Microsoft have a seat at the table by virtue of their cloud businesses. | |
| ▲ | dgellow 6 hours ago | parent | prev | next [-] | | I think the performance of models is only one aspect. You have to take into account the cash flow, how much spending commitment the different actors have, debt, etc. OpenAI has taken on some very risky commitments; if they don't get the revenue to cover their expenses in the next few years, their situation will be pretty bad | |
| ▲ | aurareturn an hour ago | parent [-] | | I wouldn't worry about their commitments. Their growth is insane if it's anything like Anthropic's revenue numbers. Their IPO might also raise a few hundred billion. |
| |
| ▲ | orwin 6 hours ago | parent | prev | next [-] | | Yeah, no, I disagree. Frontier models were almost untouchable 6 months ago, but now I can get 90% of Opus 4.5 with any Chinese model, or even with Mistral. The only thing I'm missing is the chain of thought that helps me understand the "how" and "why" when AI fails at its task. For "general purpose" AI, it's even worse: with any free model I can run on my Intel Arc (yes, sorry, it was discounted and very cheap) I get like 80% of a frontier model, at virtually no cost, and I suppose Deepseek/Mistral are like 95% there. | |
| ▲ | libertine 7 hours ago | parent | prev | next [-] | | Out of those 3, only Google seems to be in a position to reach that kind of profit level, due to distribution and advertising. Claude is kicking ass in the niche of coding and processes. 1 trillion is a lot of money for something that's not differentiated and protected in a massive market. Does it look like OpenAI has that in place? Cuban thinks they don't, and won't. | |
| ▲ | aurareturn 7 hours ago | parent [-] | | I wrote about how I think OpenAI is going to kill it in advertisements here: https://news.ycombinator.com/item?id=46087109 Claude is kicking ass in coding, but it seems like Codex is catching up fast. Claude Code's PR has taken a hit recently because the lack of compute forced Anthropic to dumb down the models. Codex has been gaining momentum. Chip manufacturing isn't really differentiated either - that didn't stop TSMC from becoming the monopoly for high-end chip nodes, capturing 90%+ of the advanced chip market. The reason is that Rock's Law makes it too expensive to build the next node unless you've generated enough revenue from the current node. I don't see why it isn't the same for SOTA models. | |
| ▲ | rwmj 6 hours ago | parent | next [-] | | Chip manufacturing is insanely hard, it requires know-how, that's the moat. It's not money, otherwise the EU and China would have leading edge fabs. Machine learning has no real moat. There's no network effect, it's not hard (you can just throw money at the problem). It's not data, because we have an existence proof that general intelligence can be trained by a few humans and a shelf full of books. The compute to do it is generally available. As soon as one organization releases open weights, everyone can use it immediately, even on modest local hardware. | | |
| ▲ | aurareturn 6 hours ago | parent [-] | | > Chip manufacturing is insanely hard, it requires know-how, that's the moat. It's not money, otherwise the EU and China would have leading edge fabs.
So is SOTA LLM training. There is OpenAI and Anthropic, and there is everyone else. Gemini has fallen behind a bit as well. There were tens of chip manufacturers in the 80s and 90s. Most of them have been absorbed or went bankrupt - just like SOTA LLM training now. Today, TSMC is a monopoly for SOTA nodes. The only reason Intel can survive is geopolitics. |
| |
| ▲ | machiaweliczny 4 hours ago | parent | prev | next [-] | | They would then be competing with Meta, which already does something like this and has mature ad tech | |
| ▲ | libertine 6 hours ago | parent | prev [-] | | I understand your argument, but I think you might be overestimating the intent of users when they're using ChatGPT. The ones killing it on ads are Google, Meta, and Amazon. I just don't see how ChatGPT will gobble up those market shares - ads are increasingly tied to sales attribution, and it would require a complete shift of the market for ChatGPT to take over the role of those 3 players. People will still look for content around the products they buy, or will shop for prices, or will look for feedback from other users of the product. |
|
| |
| ▲ | jqpabc123 7 hours ago | parent | prev [-] | | https://www.reuters.com/business/openai-makes-five-year-plan... | | |
| ▲ | aurareturn 7 hours ago | parent [-] | | This is a 5 year pledge - likely based on hitting revenue goals and not just using investor money. |
|
|
|
| ▲ | Jare 7 hours ago | parent | prev | next [-] |
| > Fewer people applying for patents, because the minute you apply for the patent, it's available to everybody, which means every model can train on it We know LLM companies have, for lack of a better word, "sidestepped" the copyright on millions of works with their "transformative fair use" arguments. Are LLMs also a way to sidestep patents? |
| |
| ▲ | pjc50 6 hours ago | parent | next [-] | | LLMs are accelerants. They enable people to do patent and copyright infringement at a much larger scale. As we know from previous examples, if you break the law enough as a company eventually they have to let you keep doing it. | | | |
| ▲ | dgellow 6 hours ago | parent | prev | next [-] | | I don’t see how? You can train on a pending patent, but what are the benefits? If it gets patented, you open yourself up to getting sued; I don’t see how AI works around that - the idea itself is still patented? I think I’m missing something for the argument to make sense. Or is the idea that if too many people use your patented idea you won’t be able to enforce it? That sounds risky to me | |
| ▲ | ozlikethewizard 5 hours ago | parent [-] | | Because no AI company has been sued yet. Without more specific legislation there is no reason for AI trainers to not pilfer everything. | | |
| ▲ | JumpCrisscross 4 hours ago | parent [-] | | Patents are public. Ingesting and innovating on them is the intended use. If you use an LLM to then make and market something that infringes on a patent, that isn’t the LLM doing any infringing, it’s you. | | |
| ▲ | stvltvs 2 hours ago | parent [-] | | But are you even aware that you're infringing a patent? Is the LLM going to helpfully flag when it responds based on a patent? | | |
| ▲ | JumpCrisscross 40 minutes ago | parent [-] | | > are you even aware that you're infringing a patent? Plenty of folks first learn they’re infringing when they get a demand letter. Unless you ask it, I’m not sure it’s on the LLM to search for prior art and patent conflicts. |
|
|
|
| |
| ▲ | 6stringmerc 4 hours ago | parent | prev [-] | | What a funny perspective - they didn’t side-step copyright, they blatantly infringed without financial consequence. The interesting “upside” is that none of the generated works are protected by copyright. So it’s a bizarre conundrum which goes to show the complete disconnect between the original intent of copyright - to protect authors and creators - and the warped capitalist mechanics of “rights holders” like Disney buying political influence for regulatory market capture. Sugar-coating the discussion is for children and dishonest ethical rationalization, in my view. |
|
|
| ▲ | SilverBirch 6 hours ago | parent | prev | next [-] |
I think it's unquestionably right that these companies can't all win, and those that don't win are going to burn a lot of money for nothing. However, there are kind of two directions this can go. In one, compute gets cheaper, in which case there's no monopoly: it'll be easy for many companies to make good models, and there won't be pricing power in serving a good model. In the other, compute gets cheaper but we keep using more and more of it, so it does likely become winner-take-all. The first scenario is good for the economy but likely bad for the returns on these AI stocks. The second is maybe bad for the economy and maybe not even good for the winner. Take Google or Meta: today Google makes a shit-tonne of money, and to make that money they need to run some servers. The servers are extremely cheap relative to the revenue they make running the business. This makes them a very attractive stock - the core of why SaaS looks great. Now let's assume the monopoly path. Google can win. I think they likely will win. But now they're going to be spending... how many hundreds of billions, constantly training new models? The cost of providing the service suddenly isn't small relative to the revenue they're getting. So even for them it looks awful for their valuation. |
| |
| ▲ | JumpCrisscross 5 hours ago | parent [-] | | > these companies I think the conclusion the market is rapidly and correctly reaching is we aren’t in an AI bubble, we’re in an OpenAI bubble. Google, Amazon and Anthropic look likely to see ROI on their capital investments because they’ve made them halfway reluctantly. Microsoft is up in the air. Not sure what Meta is doing. And with the benefit of hindsight, OpenAI used capex as a marketing strategy with investors (while Sam Altman materially lied about his compensation and somehow looped Paul Graham and Jessica Livingston, co-founder of Y Combinator, into his racket). | |
| ▲ | aurareturn 4 hours ago | parent [-] | | Why would you say we're in an OpenAI bubble if Anthropic is valued more than OpenAI now? | | |
| ▲ | JumpCrisscross 4 hours ago | parent [-] | | > Why would you say we're in an OpenAI bubble if Anthropic is valued more than OpenAI now? One, it’s not. They’re roughly even. The folks quoting crypto tokens don’t know what they’re talking about. Two, Anthropic has more revenue and higher-quality growth. Three, OpenAI is levered in a way Anthropic and the tech giants are not. Nobody is immune from being overvalued. But what separates a bubble from normal overvaluation is leverage—the consequence of the valuation deflating isn’t just losses, it’s total loss because of debt or debt-like obligations. OpenAI has racked those up in its datacenter drive. Anthropic and the tech giants have been more disciplined. If OpenAI’s revenues dip, its valuation not only crashes, its commitments to various datacenter projects start strangling it. | | |
| ▲ | aurareturn 4 hours ago | parent [-] | | Ok, let's say they are even. It doesn't make sense to me that you think OpenAI is in a bubble but Anthropic isn't. OpenAI, just a few weeks ago, claimed they actually have more revenue than Anthropic based on the same accounting rules. Since then, Codex seems to be roaring because OpenAI has more compute capacity. OpenAI also has the majority of the consumer market, which they're just beginning to monetize. How is OpenAI levered? They bought more compute earlier and are now reaping the benefits while Anthropic's growth is slowed by the lack of compute. You call Anthropic disciplined - I call it a mistake not to have bet on more compute. | |
| ▲ | JumpCrisscross 3 hours ago | parent [-] | | > How is OpenAI levered? They bought more compute earlier and are now reaping the benefits while Anthropic's growth is slowed by the lack of compute These are orthogonal points. OpenAI is levered because it has signed commitments to compute. Those are obligations it has to pay regardless of whether it hits revenue targets. A revenue slowdown hurts Anthropic. It could kill OpenAI. Leverage makes good deals into great deals. If OpenAI hits its revenue targets, levering will have been smart. There is a genuine debate around whether OpenAI’s leverage was a good bet. I think it isn’t. You think it is. That doesn’t change that OpenAI is levered, and that this makes it existentially sensitive to demand variation on the downside (and better exposed on the upside). To conclude, Anthropic and OpenAI could both be over or undervalued. But only OpenAI can truly be in a bubble. | | |
| ▲ | kasey_junk 2 hours ago | parent | next [-] | | Leverage might also be a requirement for the product if you don’t already have compute. Anthropic's lack of compute is strangling it before our eyes. I agree that OpenAI is more levered, but a bubble can be caused by over-exuberance in equity as well. And Anthropic has that in spades. | |
| ▲ | JumpCrisscross an hour ago | parent [-] | | > Leverage might also be a requirement for the product if you don’t already have compute. Anthropics lack of compute is strangling it before our eyes Two sides of the same coin. A leveraged farmer buys tractors up front and can sow more land. That pays in a boom. The one who bootstrapped is “strangled” by being unable to go after land until they have cash. If demand falters, however, the second farmer—worst case—has idle tractors. The first owes payments he can no longer make. Bringing it back to AI, Anthropic seems to show you don’t need massive leverage to at least compete. They did it with equity. It isn’t bootstrapping, like the example above, but it’s closer to that than the full tilt OpenAI has gone on. Knowing what we know now, Anthropic came in underlevered. They should have borrowed a bit. Given OpenAI is missing sales targets [1], it seems they are probably overlevered. They have similar revenues and valuations. Put together, that makes OpenAI more bubble-esque. > a bubble can be caused by over exuberance in equity as well. And Anthropic has that in spades Agree. But putting aside idiots who may have levered their Anthropic equity, overvalued equity means you get to fight another day after a crash. [1] https://www.wsj.com/tech/ai/openai-misses-key-revenue-user-t... |
| |
| ▲ | aurareturn 2 hours ago | parent | prev [-] | | So your whole argument is that OpenAI is in a bubble because their bet on more compute won't pay off - but it's paying off now. And Anthropic is not in a bubble, despite being valued the same as OpenAI, because they were more careful with compute - for which they're now paying a heavy price, having to dumb down Claude Code. So what do you think OpenAI should be valued at, if they're in a bubble now? | |
| ▲ | JumpCrisscross an hour ago | parent [-] | | > your whole argument is that OpenAI is in a bubble because their bet on more compute won't payoff, but it's paying off now I’m saying OpenAI are levered. If they’re levered and overvalued, they’re a bubble. If they aren’t overvalued, which is to say if they can beat their 2.3x target, i.e. $60+ billion in ARR, they played it savvily. > But Anthropic is not in a bubble, despite being valued the same as OpenAI, because they were more careful with compute Anthropic were more careful with debt and debt-like obligations. > what do you think OpenAI should be valued at if they're in a bubble now? You’re still conflating orthogonal points. I think AI should be valued around a growth-adjusted revenue multiple [1] of 4 to 7x. (For context, tech was 2-4x 2015 to 2017, 4-7x 2018-2019, 6.7x in 2021, 3x in 2023, and has now settled back to around 5x for most companies.) Using $30bn ARR for Anthropic (300% growth) and $25bn for OpenAI (130% growth), both based on the companies’ own projections—Anthropic’s 1,400% growth YoY makes historical figures a bit silly—we get $360 to $630bn for Anthropic and $130bn to $230bn for OpenAI. I’d put a wide error bar on those figures. Which means I can’t reject their current valuations. Which is why I’m not arguing about who is and isn’t overvalued. The critical observation is Anthropic at $360bn is bruised but survives. OpenAI, even at $230bn and potentially much higher, is basically bankrupt. That is the difference between being overvalued and bubbled. [1] PEG, but E is R | | |
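The growth-adjusted multiple in the comment above (PEG with revenue standing in for earnings) works out to value ≈ ARR × multiple × growth factor. A minimal sketch of that arithmetic, using the commenter's own assumed ARR projections, growth rates, and 4-7x multiple band:

```python
# Growth-adjusted revenue valuation: value = ARR * multiple * growth factor,
# i.e. a PEG-style ratio with revenue in place of earnings.
def valuation_range(arr_bn, growth_pct, lo_mult=4.0, hi_mult=7.0):
    """Return a (low, high) valuation in $bn for an ARR in $bn and YoY growth in %."""
    growth_factor = growth_pct / 100  # e.g. 300% growth -> factor of 3
    return arr_bn * lo_mult * growth_factor, arr_bn * hi_mult * growth_factor

# Anthropic: $30bn ARR at 300% projected growth -> roughly $360bn to $630bn
print(valuation_range(30, 300))
# OpenAI: $25bn ARR at 130% projected growth -> roughly $130bn to $230bn
print(valuation_range(25, 130))
```

The figures reproduce the commenter's stated ranges; the inputs are their projections, not verified numbers.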
| ▲ | aurareturn 7 minutes ago | parent [-] | | Anthropic is going 1,400% YoY but OpenAI is not anywhere close. Their models are close in capabilities and now Anthropic is choking on the lack of compute. Claude Code doesn't have any secret sauce that Codex doesn't. I fully expect OpenAI to grow faster the rest of the year due to higher compute capacity. |
|
|
|
|
|
|
|
|
|
| ▲ | grunder_advice 6 hours ago | parent | prev | next [-] |
| IMHO, Google, Meta and Microsoft are best positioned to be the last ones standing because they have alternative cashflows.
The danger with OpenAI and Anthropic is that they might end up being the Sun Microsystems of the AI era. It would only take them a couple of missteps along the wrong technology path to be out of the game. |
|
| ▲ | Yizahi 3 hours ago | parent | prev | next [-] |
OpenAI maybe won't, but someone else will. Maybe the US government, maybe some fund, etc. Too big to fail. |
|
| ▲ | jqpabc123 7 hours ago | parent | prev | next [-] |
| In my experience, Cuban is generally pretty good at stripping away the stupidity and BS. |
| |
| ▲ | rwmj 7 hours ago | parent | next [-] | | He's stating the obvious, but perhaps it needed to be said. | |
| ▲ | aurareturn 7 hours ago | parent | prev [-] | | Sometimes he is the stupidity and BS. | | |
| ▲ | consumer451 6 hours ago | parent | next [-] | | Genuine question: could you please share some notable examples? | |
| ▲ | orwin 6 hours ago | parent | prev [-] | | Yeah, he falls into the categories of "able to underline the issues" and "explains why you are being bullshitted", but also into "snake-oil salesman", and that makes him hard to trust. Basically I listen to him to talk basketball, and that's it. |
|
|
|
| ▲ | feverzsj 6 hours ago | parent | prev [-] |
| Why should they return your money if it's a Ponzi scheme? |