gallerdude 5 hours ago

I always grew up hearing “competition is good for the consumer.” But I never really internalized how good fierce battles for market share are. The amount of competition in a space is directly proportional to how good the results are for consumers.

hibikir 2 hours ago | parent | next [-]

Competition is great, but it's so much better when it is all about shaving costs. I am afraid that what we are seeing here is an arms race with no moat: something that will behave a lot like an all-pay auction. The competitors all sink their investment, and since the winner takes all, and it never makes sense to stop the marginal investment while you think you have a chance to win, ultimately more resources are spent than the value ever created.

This might not be what we are facing here, but seeing how little moat anyone in AI has, I just can't discount the risk. And then instead of the consumers of today getting a great deal, we zoom out and see that 5x more was spent developing the tech than was needed, and that's not all that great economically as a whole. It's not as if the weights of a 3-year-old model are useful capital to be reused later, the way the dot-com boom left us with far more fiber than was needed at the time, but which could be bought and turned on profitably later.
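The dynamic described here (every competitor sinks its investment, only one collects the prize) can be sketched numerically. The bidding rule and all the numbers below are illustrative assumptions, not from the thread; real bidders in equilibrium would shade their bids, but the naive "invest while you think you can win" behavior is the point being made:

```python
import random

def all_pay_race(prize=100.0, n_bidders=5, n_trials=10_000, seed=0):
    """Toy all-pay auction: every bidder sinks its investment,
    but only the highest bidder collects the prize.

    Returns the average total spend per round, to compare
    against the single prize of value `prize`."""
    rng = random.Random(seed)
    total_spent = 0.0
    for _ in range(n_trials):
        # Naive bidders: each invests some fraction of the prize value,
        # reasoning that it still has a chance to win.
        bids = [rng.uniform(0, prize) for _ in range(n_bidders)]
        total_spent += sum(bids)  # losers' spend is not refunded
    return total_spent / n_trials
```

With five naive bidders each investing up to the prize value, the aggregate spend per round averages about 2.5x the prize: more resources burned than value created, which is the worry above.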

skybrian an hour ago | parent [-]

Three-year-old models aren't useful because there are (1) cheaper models that are roughly equivalent, and (2) better models.

If Sonnet 4.6 is actually "good enough" in some respects, maybe the models will just get cheaper along one branch, while they get better on a different branch.

tomjakubowski 27 minutes ago | parent [-]

It's funny, it sure seems like software projects in general follow the Lindy effect: considering their age and mindshare, I can safely predict gcc, emacs, SQLite, and Python will still be running somewhere 10, 20, 30 years from now. Indeed, people will choose to use certain software specifically because it's been around forever; it's tried and true.

But LLMs, and AI-related tooling, seem to really buck that trend: they're obsoleted almost as soon as they're released.
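The Lindy claim can be made concrete: under a Pareto (power-law) survival model, the expected remaining lifetime of a project grows linearly with its age. A minimal sketch, with the shape parameter `alpha` as an illustrative assumption (not something the comment specifies):

```python
def expected_remaining_life(age_years: float, alpha: float = 2.0) -> float:
    """Lindy effect under a Pareto survival model:
    E[remaining life | survived to age] = age / (alpha - 1), for alpha > 1."""
    if alpha <= 1:
        raise ValueError("expectation diverges for alpha <= 1")
    return age_years / (alpha - 1)

# A decades-old project (e.g. gcc, ~38 years) has a far longer expected
# remaining lifetime than a model released one year ago.
```

LLMs bucking the trend just means their survival isn't power-law at all: each release is superseded regardless of how long its predecessor lasted.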

gordonhart 5 hours ago | parent | prev | next [-]

Remember when GPT-2 was “too dangerous to release” in 2019? That could have still been the state in 2026 if they didn’t YOLO it and ship ChatGPT to kick off this whole race.

WarmWash 4 hours ago | parent | next [-]

I was just thinking earlier today how in an alternate universe, probably not too far removed from our own, Google has a monopoly on transformers and we are all stuck with a single GPT-3.5 level model, and Google has a GPT-4o model behind the scenes that it is terrified to release (but using heavily internally).

vineyardmike 2 hours ago | parent | next [-]

This was basically almost real.

Before ChatGPT was even released, Google had an internal-only chat-tuned LLM. It went "viral" because one of the testers thought it was sentient, and it caused a whole media circus. This is partially why Google was so ill-equipped to even start competing: they had fresh wounds from a crazy media circus.

My pet theory, though, is that this news is what inspired OpenAI to chat-tune GPT-3, which was a pretty cool text-generator model, but not a chat model. So it may have been a necessary step to get chat LLMs out of Mountain View and into the real world.

https://www.scientificamerican.com/article/google-engineer-c...

https://www.theguardian.com/technology/2022/jul/23/google-fi...

brador 2 hours ago | parent | prev | next [-]

Now think about how often the patent system has stifled and stalled and delayed advancement for decades per innovation at a time.

Where would we be if patents never existed?

sarchertech 2 hours ago | parent | next [-]

Who knows? If we’d never moved on from trade secrets to patents, we might be a hundred years behind.

cma 2 hours ago | parent | prev [-]

To be fair, Google has a patent on the transformer architecture. Their PageRank patent monopoly probably helped fund the R&D.

dboreham 2 hours ago | parent [-]

They also had a patent on map/reduce.

nsxwolf 2 hours ago | parent | prev [-]

It would have been nice to be able to work a few more years and retire.

dimitrios1 2 hours ago | parent [-]

Will your retirement be enjoyable if everyone else around you is struggling?

minimaxir 4 hours ago | parent | prev | next [-]

They didn't YOLO ChatGPT. There were more than a few iterations of GPT-3 over a few years, which were actually overmoderated. Then they released a research preview named ChatGPT (barely functional by modern standards) that got traction outside the tech community because it was free, and the pivot ensued.

nikcub 4 hours ago | parent | prev | next [-]

I also remember when the PlayStation 2 required an export control license because its 1 GFLOP of compute was considered dangerous.

that was also brilliant marketing

gildenFish an hour ago | parent | prev | next [-]

In 2019 the technology was new and there was no 'counter' at that time. The average person was not thinking about the presence and prevalence of AI the way we do now.

It was kinda like having muskets against indigenous tribes in the 1400s-1500s vs a machine gun against a modern city today. The machine gun is objectively better, but it hasn't kept pace with the increase in defensive capability of a modern city with a modern police force.

jefftk 5 hours ago | parent | prev | next [-]

That's rewriting history. What they said at the time:

> Nearly a year ago we wrote in the OpenAI Charter : “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas. -- https://openai.com/index/better-language-models/

Then over the next few months they released increasingly large models, with the full model public in November 2019 https://openai.com/index/gpt-2-1-5b-release/ , well before ChatGPT.

gordonhart 5 hours ago | parent | next [-]

> Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.

I wouldn't call it rewriting history to say they initially considered GPT-2 too dangerous to be released. If they'd applied this approach to subsequent models rather than making them available via ChatGPT and an API, it's conceivable that LLMs would be 3-5 years behind where they currently are in the development cycle.

IshKebab 5 hours ago | parent | prev [-]

They said:

> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT‑2 along with sampling code (opens in a new window).

"Too dangerous to release" is accurate. There's no rewriting of history.

tecleandor 4 hours ago | parent [-]

Well, and it's being used to generate deceptive, biased, or abusive language at scale. But they're not concerned anymore.

girvo 2 hours ago | parent [-]

They've decided that the money they'll make is too important, who cares about externalities...

It's quite depressing.

ModernMech 2 hours ago | parent | prev [-]

Yeah, and Jurassic Park wouldn't have been a movie if they decided against breeding the dinosaurs.

maest 4 hours ago | parent | prev | next [-]

Unfortunately, people naively assume all markets behave like this, even when the market, in reality, is not set up for full competition (due to monopolies, monopsonies, informational asymmetry, etc).

XorNot 2 hours ago | parent [-]

And AI is currently killing a bunch of markets intentionally: the RAM deal for OpenAI wouldn't have gone through the way it did if it hadn't been done in secret with anti-competitive restrictions.

There's a world of difference between what's happening and what RAM prices would look like if OAI and others were just bidding for modules as they were produced.

raincole 4 hours ago | parent | prev | next [-]

The really interesting part is how often you see people on HN deny this. People have been saying the token cost will 10x, or that AI companies are intentionally making their models worse to trick you into consuming more tokens. As if making a better model weren't the most cutthroat competition (probably the most competitive market in human history) right now.

IgorPartola 3 hours ago | parent | next [-]

I mean, enshittification has not quite begun yet. Everyone is still raising capital so current investors can pass the bag to the next set. As soon as the money runs out, monetization will overtake valuation as the top priority. Then suddenly when you ask any of these models "how do I make chocolate chip cookies?" you will get something like:

> You will need one cup King Arthur All Purpose white flour, one large brown Eggland’s Best egg (a good source of Omega-3 and healthy cholesterol), one cup of water (be sure to use your Pyrex brand measuring cup), half a cup of Toll House Milk Chocolate Chips…

> Combine the sugar and egg in your 3 quart KitchenAid Mixer and mix until…

All of this will contain links and AdSense looking ads. For $200/month they will limit it to in-house ads about their $500/month model.

gnatolf 2 hours ago | parent [-]

While this is funny, the actual race has already started in how companies nudge LLM results toward their products. We can't be saved from enshittification, I fear.

raddan 2 hours ago | parent [-]

I am excited about a future where I am constantly reminded to like and subscribe my LLM’s output.

abelitoo an hour ago | parent [-]

I'm concerned for a future where adults stop realizing they themselves sound like LLMs because the majority of their interaction/reading is output from LLMs. Decades of corporations being the ones molding the very language we use is going to have an interesting effect.

Gigachad 2 hours ago | parent | prev [-]

Only until the music stops. Racing to give away the most stuff for free can only last so long. Eventually you run out of other people’s money.

patapong 2 hours ago | parent [-]

Uber managed to make it work for quite a while

raddan 2 hours ago | parent [-]

They did, but Uber is no longer cheap [1]. Is the parent’s point that it can’t last forever? For Uber it lasted long enough to drive most of the competition away.

[1] https://www.theguardian.com/technology/2025/jun/25/second-st...

fwip an hour ago | parent [-]

Uber's in a business where you have some amount of network effect - you need both drivers available using your app, as well as customers hailing rides. Without a sufficient quantity of either, you can't really turn a profit.

LLM providers don't, really. As far as I can tell, their moat is the ability to train a model, and possessing the hardware to run it. Also, open-weight models provide a floor for model training. I think their big bet is that gathering user-data from interactions with the LLM will be so valuable that it results in substantially-better models, but I'm not sure that's the case.

gmerc 5 hours ago | parent | prev | next [-]

Until 2 remain, then it's extraction time.

raffkede 4 hours ago | parent [-]

Or self host the oss models on the second hand GPU and RAM that's left when the big labs implode

baq 2 hours ago | parent | next [-]

China will stop releasing open-weight models as soon as they get within striking range; cf. Seedance 2.0.

osti 42 minutes ago | parent [-]

ByteDance never really open sourced their models though. But I agree, they will only open source when it doesn't really matter.

2 hours ago | parent | prev [-]
[deleted]
yogurt0640 3 hours ago | parent | prev | next [-]

I grew up with every service getting enshittified in the end. Whoever has more money wins the race and gets richer; that's the free market for ya.

poszlem 4 hours ago | parent | prev [-]

This is a bit of a tangent, but it highlights exactly what people miss when talking about China taking over our industries. Right now, China has about 140 different car brands, roughly 100 of which are domestic. Compare that to Europe, where we have about 50 brands competing, or the US, which is essentially a walled garden with fewer than 40.

That level of fierce internal competition is a massive reason why they are beating us so badly on cost-effectiveness and innovation.

tartoran 3 hours ago | parent | next [-]

It's the low cost of labor, in addition to the lack of environmental regulation, that made China a success story. I'm sure the competition helps too, but it's not the main driver.

amunozo 2 hours ago | parent [-]

That happens in most of the world. Why China, then?

sarchertech 2 hours ago | parent [-]

Because they have a billion and a half people and they were willing to be the western world’s factory.

Gigachad 2 hours ago | parent | prev [-]

Consequence is they are now facing an issue of “cancer villages” where the soil and water are unbelievably poisonous in many places.

8note 2 hours ago | parent [-]

Which isn't particularly unique. It's comparable to something like some subset of Americans getting black lung, or the health problems from the train explosion in East Palestine.

It took a lot of work for environmentalists to get some regulation into the US, Canada, and the EU. China will get there eventually.

Gigachad 2 hours ago | parent [-]

It isn’t. I just bring it up to state there is a very good reason the rest of the world doesn’t just drop their regulations. In the future I imagine China may give up many of these industries and move to cleaner ones, letting someone else take the toxic manufacturing.