| ▲ | iot_devs 6 hours ago |
| Can someone explain what the expectations are for these AI labs? I mostly see their products as commodities at this point, with strong open source contenders. Eventually it will become hard to justify the premium on these models. |
|
| ▲ | ForrestN 5 hours ago | parent | next [-] |
| I think this "Mythos" situation, whether real or hype, points to the endgame here. Eventually, when you have a model powerful enough to have big consequences in the world, you stop worrying about selling it to consumers and start either a) using it to rule the world or b) watching as it gets nationalized. If you have a machine powerful enough to automate everything, why sell access to it when you could just...be all things to all people? Use the god machine yourself to take over more and more of the economy? |
| |
| ▲ | lokar 5 hours ago | parent | next [-] | | I disagree. The point of the mythos hype is to get regulation to cut off competitors. | | |
| ▲ | rhubarbtree an hour ago | parent | next [-] | | I disagree. The point of the mythos hype is to bump the IPO. | |
| ▲ | inciampati 5 hours ago | parent | prev [-] | | Didn't OAI just try that 18 months ago? | | |
| ▲ | cmrdporcupine 2 hours ago | parent [-] | | They'll all keep on trying it until it either totally fails or succeeds. As people keep pointing out, the moat is insufficient to ward off international or domestic competitors. So the answer is to try to seek regulatory capture. |
|
| |
▲ | JumpCrisscross 4 hours ago | parent | prev | next [-] | | > why sell access to it when you could just...be all things to all people? Because, as OpenAI is learning [1], you still need to sell it. The tech giants have a seat at the table mostly because they have distribution down. [1] https://www.cnbc.com/2026/02/23/open-ai-consulting-accenture... | |
| ▲ | SpicyLemonZest 5 hours ago | parent | prev [-] | | Sometimes selling services is just the best business model. Intuit has accounting software powerful enough to have big consequences in the world, yet they mostly sell it to accountants rather than doing the accounting themselves. |
|
|
| ▲ | loveparade 6 hours ago | parent | prev | next [-] |
| I give it one to two more years before open source models have fully caught up. Products are commodities, and models are commodities too. GPU cores are still hard to get for inference at scale right now. The labs need a platform with lock-in, but I'm unsure what that would look like, or why it wouldn't be based on open source models. |
| |
| ▲ | alex_duf 5 hours ago | parent | next [-] | | What does "fully caught up" mean in the context of an ever evolving technology?
I think I'm in support of open-weight models (though there are safety implications), but these things aren't cheap to train and run. That fact alone gives leading labs no incentive to release cutting-edge open-weight models. Why spend the money and then give the product away for free? Now, if "fully caught up" means today's level of intelligence is available for free in two years, by then that level of intelligence will mean very little. | | |
▲ | vorticalbox 5 hours ago | parent | next [-] | | It's never free; you're shifting costs from paying a company for their API to paying the power costs of running it locally. | |
| ▲ | stavros 5 hours ago | parent | prev [-] | | Yeah I don't understand it, it's a marathon with three companies perpetually a minute ahead, and people keep saying "I expect the stragglers to catch up". The only thing I can see them meaning is what you said, "in a minute the stragglers will be where the leaders were a minute ago", which, yeah, sure. | | |
| ▲ | lelanthran 11 minutes ago | parent | next [-] | | That's fine. I can afford to wait a minute if it means I pay $10/m instead of $5k/m. | |
| ▲ | ReliantGuyZ 3 hours ago | parent | prev | next [-] | | By my estimation, there is a point where these models are "good enough" for the vast vast majority of all appropriate tasks, after which point further investment by the major labs will have diminishing returns. While they might stay ahead by some measure, the open models will be good enough too, and I assume significantly cheaper like they are now. Or AGI hits and this theory collapses, but that's feeling less likely every day. | |
▲ | patrickmcnamara 5 hours ago | parent | prev | next [-] | | It's not a marathon, or any race. There is no finish line. It doesn't matter that much that someone is a minute ahead. | |
| ▲ | mrbombastic 5 hours ago | parent | prev [-] | | It makes perfect sense if you think things cannot improve indefinitely | | |
| ▲ | PunchyHamster an hour ago | parent | next [-] | | Also, there is a good enough point where improvements for a given use case are on heavy diminishing returns | |
▲ | inciampati 5 hours ago | parent | prev [-] | | They do approximate any function... within the range they're trained on. And that range is human-limited, at least today. |
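(Editor's note: a minimal sketch of the in-range vs. out-of-range point above, using a polynomial least-squares fit as a stand-in for any trained approximator; the target function and ranges are invented for illustration only.)

```python
import numpy as np

# "Train" a degree-7 polynomial on sin(x) over the range [-3, 3].
x_train = np.linspace(-3, 3, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), 7)

# Inside the training range, the approximation is tight.
err_in = abs(np.polyval(coeffs, 1.5) - np.sin(1.5))

# Outside it, the same model diverges badly.
err_out = abs(np.polyval(coeffs, 9.0) - np.sin(9.0))

print(err_in, err_out)  # in-range error is tiny; out-of-range error is huge
```

The same qualitative behavior shows up in neural networks: excellent interpolation within the training distribution, unreliable extrapolation beyond it.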
|
|
| |
▲ | xdennis an hour ago | parent | prev | next [-] | | Why do people have such faith in "open source" models? There's nothing "open source" about them. No individuals have the ability to train such models. They are just released by companies to commoditize the models of the competition. If Mythos is the endgame, companies won't release open-weight equivalents, and no private individuals have the capital to train such models. | | |
▲ | quikoa 43 minutes ago | parent [-] | | The open models cannot be taken away. Anyone with the right hardware can host them. Unlike API/subscription services, which can ban you, drastically raise prices, or reduce your limits. |
| |
▲ | empath75 3 hours ago | parent | prev [-] | | What is the transition state where people start using open source models that you imagine actually happening? Play out a scenario. An open source model is released that is as capable as Mythos. Presumably it requires hardware big enough that running it at home is unfeasible. You are imagining that individuals can run it in the cloud themselves for cheaper than API tokens would cost? Or even small companies? And that Anthropic and OpenAI won't be able to cut costs deeper than their competitors while staying profitable? If it is fundamentally a commodity, that means "running it yourself" also isn't really interesting as a proposition. Many of the world's biggest companies sell commodities. It's a great business to be in if you can sell them cheaper than anyone else. The value add here isn't the model, it is "having a bunch of compute and using it more efficiently than anyone else". |
|
|
| ▲ | stephencoyner an hour ago | parent | prev | next [-] |
| Coding agents are getting deployed wall to wall in most, if not all, of the major tech companies. Many have no token limits: spend as much as you want as long as you have a good story to tell. Companies bake their workflows into these tools. Internal processes start to be written up around specific tools. Once something works, it gets pushed out at scale for all to copy. Anthropic hit $30B in revenue, and this is just the start of coding agents being deployed at scale. Hard to look past these numbers at this point. |
| |
▲ | nitwit005 28 minutes ago | parent [-] | | The company I used to work for once paid Oracle a lot of money. It pays $0 now, because there are free alternatives. It did take a while, but that transformation has happened across the industry. |
|
|
| ▲ | 0xbadcafebee 4 hours ago | parent | prev | next [-] |
| They are a commodity - but also cyber weapons. Warmongering nations are now in an arms race to have the best AI so they can have superior cyber weapons and intelligence capabilities. But they don't want to pick just one lab; they want multiple AI defense contractors to compete over contracts. As the US sold weapons to many nations in the past, so will China, the US, France, etc. sell AI cyber capability to other nations. Likely every modern nation will need some datacenter hosting a cluster from its preferred vendor, as nobody's going to trust the US or China with their security. |
|
| ▲ | muyuu 4 hours ago | parent | prev | next [-] |
the prospect that any of those big players will be able to pay back 100s of billions with profit on top sounds fantastical to me. it will be interesting to see it unfold |
|
| ▲ | empath75 3 hours ago | parent | prev | next [-] |
> I mostly see their products as commodity at this point, with strong open source contenders. I have seen this argument made a lot, but LLM serving being a commodity makes it _better_ for them, not worse. If it's a commodity, then you are competing entirely on price, and the players that will win on price will be the largest ones, because they can find efficiencies that smaller competitors won't have. It's actually the small LLM companies that are in trouble if LLM serving commoditizes. They will need to distinguish themselves on features, because they can't compete on price. And even there the big labs will have an advantage. |
|
| ▲ | hmmmmmmmmmmmmmm 6 hours ago | parent | prev | next [-] |
| None of them have any moat, OpenAI already lost the lead [1] and no one is "winning". It is just a race to the bottom as they burn through GPUs that won't even last that long. [1] https://x.com/kenshii_ai/status/2046111873909891151/photo/2 |
| |
▲ | Tepix 6 hours ago | parent | next [-] | | GPUs are lasting longer than foreseen; in fact, old GPUs are more valuable now (making more money!) than they were three years ago when they were new. Tokens will continue to increase in price until supply meets demand. That's going to take a while. | | | |
| ▲ | kitsune1 4 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | cma 6 hours ago | parent | prev | next [-] |
| Everyone using Claude Code on a personal subscription is opted in by default to having their data trained on. Private troves of data like this could end up producing a winner-take-all scenario. More data, better models, attracts more users, results in more exclusive data (what Altman calls the data flywheel). |
| |
▲ | spenvo 6 hours ago | parent | next [-] | | PSA: this is true (the defaults), but there's a "Help improve Claude" setting that you can disable here https://claude.ai/settings/data-privacy-controls It's my understanding that, as long as this is off, Anthropic does not train on Claude Code conversations or inputs/outputs -- if anyone knows otherwise, please say so and provide a link if possible. | | |
| ▲ | devsda 5 hours ago | parent [-] | | Anthropic is no MS, but strange undocumented bugs can sneak in sometimes. |
| |
▲ | johnbarron 5 hours ago | parent | prev [-] | | >> Everyone using Claude code on a personal subscription is default opted in to getting their data trained on This is completely untrue if you use AWS Bedrock, whether in a private or a business context. It's one of their core arguments for using the service. [1] - "...At Amazon, we don’t use your prompts and outputs to train or improve the underlying models in Amazon Bedrock and SageMaker JumpStart (including those from third parties), and humans won’t review them. Also, we don’t share your data with third-party model providers. Your data remains private to you within your AWS accounts..." [1] - https://aws.amazon.com/blogs/security/securing-generative-ai... | | |
| ▲ | cma 3 hours ago | parent [-] | | I'm talking about the subsidized subscription plans. The data isn't the sole point of them, they also are about bringing in users that will encourage the product use in companies and ultimately drive more profitable API adoption within their orgs, and just general diffuse mindshare doing the same. You can still opt out (except with Google's offering which disables lots of features if you opt out of training). |
|
|
|
| ▲ | johnbarron 5 hours ago | parent | prev | next [-] |
| Please, some of us are long NVIDIA... let us cope in peace. :-) Here is the thing nobody wants to say out loud, or is too dumb to realize: AI is intelligence, and intelligence has almost never been the binding constraint on productivity. So you will get no productivity increase from the AI bubble. Yes, you read that correctly.

The test is simple: if raw brainpower were the bottleneck, you could 10x any company by hiring 200 PhDs. In practice you get 200 brilliant people writing unread memos, refactoring things that worked, and forming a committee to rename the committee. Smart has always been cheaper and more abundant than the discourse pretends.

Every real productivity revolution came from somewhere else, like energy (steam, electricity), capital stock (machines that do the physical work), or coordination (railroads, shipping containers, the assembly line, the internet). None of these raised the average IQ of the workforce; they changed what a given worker could move, reach, or coordinate with. Solow's old line basically still holds: output per worker grows when you give the worker better tools and infrastructure, not better neurons.

Meanwhile, the actual bottlenecks in a modern firm are regulatory approval, legacy systems, procurement cycles, customer adoption, internal politics, and physical supply chains that don't care how clever your email was. A smart intern at every desk produces more artifacts, not more throughput, and in a lot of organizations more artifacts is actively negative ROI. Jevons does not save you either; cheaper cognition mostly means more slide decks, not more GDP.

So the setup is: models commoditizing on one side, and on the other a product whose core value add (more intelligence, faster) is aimed at a constraint that was never really binding. That is, of course, a rough combo for a trillion-dollar capex supercycle. Fun for the trade, while it lasts, but there is no thesis.

Just don't tell CNBC, and short NVDA on time ;-) |
| |
▲ | brianjlogan 5 hours ago | parent | next [-] | | Not to mention that your competitor can turn around and hire the same team of PhDs at the same rate you can. You can compare PhDs on leaderboards and get access in seconds with a new API key or model selector. Granted, LLMs are not even PhDs. What a weird time we live in... | |
| ▲ | paganel 3 hours ago | parent | prev | next [-] | | > Jevons does not save you either, There's also a very strong Trurl and Klapaucius [1] component to this AI craziness, as in I remember a passage in Lem's The Cyberiad where either Trurl or Klapaucius were "discussing" with an intelligent/AGI robot and asking it for stuff-to-know/information, at which point said AGI robot started literally inundating them with information, paper on top of paper on top of paper of information. At that point it doesn't even matter if that information is correct or smart or whatever, because by that point the very amount of said information has changed everything into a futile endeavour. [1] https://en.wikipedia.org/wiki/The_Cyberiad | |
| ▲ | CamperBob2 4 hours ago | parent | prev [-] | | Here is the thing nobody wants to say out loud or they are too dumb to realize. AI is intelligence, and intelligence has almost never been the binding constraint on productivity. Exactly. We don't use the intelligence we already have! That seems to be the real problem with the "AGI" concept. Given such a capability, we'll just nerf it, gatekeep it, and/or bias it. There's no reason to think we'll actually use it to benefit humanity as a whole. It will be shaped into an instrument to enforce our prejudices. |
|
|
| ▲ | nl 6 hours ago | parent | prev | next [-] |
| $30B ARR says otherwise. |
| |
| ▲ | Sayrus 6 hours ago | parent | next [-] | | ARR says nothing about the ability of these companies to retain customers once subsidies stop. | |
| ▲ | 101008 6 hours ago | parent | prev [-] | | revenue is not profit | | |
▲ | lokar 5 hours ago | parent | next [-] | | And EBITDA is not GAAP | |
| ▲ | trgn 5 hours ago | parent | prev [-] | | in no world is 30B ARR a bad thing | | |
▲ | sensanaty 5 hours ago | parent [-] | | If they're spending $60B annually, then that is bad. Obviously none of us know what their real burn rate is, but revenue is an irrelevant number if you don't have the full picture. |
|
|
|
|
| ▲ | engineer_22 5 hours ago | parent | prev [-] |
| >I mostly see their products as commodity at this point, with strong open source contenders. > Eventually it will become hard to justify the premium on these models. On the contrary, the model is the moat. The model represents embodied capital expenditure in the form of training. Training is not free, and it is not a commodity, it is heavily influence by curation. Eventually the ever-increasing training expense will reduce the competition to 2-3 participants running cutting edge inference. Nobody else will be able to afford the chips, watts, and warehouse. It's a physics problem - not a lack of will. If you're a retail user, and a lower-tier model is suitable for your work, you'll have commodity LLM's to help you. Deprecated models running on tired silicon. Corporate surveillance and ad-injection. But if you're working on high-stakes problems in real time, you're going to want the best money can buy, so you'll concentrate your spend on the cutting-edge products, open API's, a suite of performance monitoring tools and on-the-fly engineering support. And since the cutting edge is highly sought after, it's a seller's market. The cutting edge products buoyed by institutional spend will pull away from the pack. Their performance will far exceed what you're using, because your work isn't important. Hockey stick curve. Haves and Have-Nots. The economic reality is predetermined by today's physical constraints - paradigm shifting breakthroughs in quantum computing and superconductors could change the calculus but, like atomic fusion power, don't count on it being soon. |