| ▲ | lukan 3 hours ago |
| How can there be a "winner takes all" situation with AI? OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and so is Google with Gemini... and the open weight models are 2 years behind. Any win here seems only temporary, even if a new breakthrough to strong AI somehow happens. |
|
| ▲ | conradkay 3 hours ago | parent | next [-] |
| Recursive self-improvement is one argument. Otherwise, winner takes all seems much less likely than an OpenAI/Anthropic duopoly. Other providers will obviously have plenty of uses besides the best models, but even looking at the revenue right now, it's pretty concentrated at the top. So if I'm Google, I'd want a decent chunk of at least one of them. |
| |
| ▲ | svnt 2 hours ago | parent [-] | | What is the argument for a duopoly when Kimi and Deepseek models are only months behind? It’s a commodity in the making. | | |
▲ | fc417fc802 2 hours ago | parent | next [-] | | That's certainly how it looks right now, but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI, but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically, we're a bunch of tribesmen speculating about the future potential outcomes of the space race (i.e. the impacts, limits, and timeline of ASI). | | |
| ▲ | zarzavat 10 minutes ago | parent [-] | | Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you? If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models. I guess you can sell it to the Department of War. |
| |
▲ | conradkay 2 hours ago | parent | prev [-] | | They're months behind now and have very low market share, so as long as they stay months behind, the duopoly/triopoly can hold. |
|
|
|
| ▲ | nine_k 3 hours ago | parent | prev | next [-] |
| Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players. |
|
| ▲ | calebkaiser 2 hours ago | parent | prev | next [-] |
| 2 years? 2 years ago, gpt-4o was OpenAI's flagship model. The gap is real, but much smaller than 2 years. |
|
| ▲ | jedberg 2 hours ago | parent | prev | next [-] |
| The first to AGI, or a close approximation, is the winner. That's what the investors in Anthropic and OpenAI are betting on. I'd be willing to bet that the Venn diagram of investors in those two companies is nearly a circle. |
| |
▲ | lukan 2 hours ago | parent | next [-] | | "The first to AGI, or a close approximation, is the winner." But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network ... then what happens if someone leaks it, or China steals it and releases it openly tomorrow? | |
| ▲ | devmor 2 hours ago | parent | prev | next [-] | | Are these investors high? Or just insane? | | |
| ▲ | fc417fc802 2 hours ago | parent | next [-] | | Neither. It's the most severe FOMO in history. The best case scenario is equivalent to attempting to pick future winners just prior to the industrial revolution really kicking off. Except this time around the technological timelines appear to be severely compressed and everyone is fully aware of what's at stake. And again, that's the best case scenario. | |
▲ | saintfire 2 hours ago | parent | prev [-] | | It's just market euphoria. |
| |
| ▲ | svnt 2 hours ago | parent | prev [-] | | This depends on a fantasy cascade of functional consequences of AGI, whatever that acronym even means anymore. It is just cargo cult financing at this point. |
|
|
| ▲ | ngruhn 3 hours ago | parent | prev | next [-] |
| I guess if you build the first AI that can autonomously self improve, then nobody can catch up anymore. |
| |
▲ | hattmall 2 hours ago | parent | next [-] | | That seems really paradoxical, and I think it would just burn up compute. The AI really doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable. | | |
| ▲ | fc417fc802 2 hours ago | parent [-] | | If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time. |
| |
▲ | lukan 2 hours ago | parent | prev | next [-] | | But what if a second AI that can self-improve comes up? Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach. | |
▲ | techpression 41 minutes ago | parent | prev [-] | | If that happens, catching up will be meaningless; everything we know and care about will change.
You don't even have to be doomsday about it: a self-improving AI will quickly be more efficient than a human brain, all the data centers will be useless, tech companies will collapse (so will most others), and everyone will have an incredible AI resource for the price of a hotdog.
There's no way it wouldn't leak from whoever made it, either through people or through the AI itself. |
|
|
| ▲ | teaearlgraycold 3 hours ago | parent | prev [-] |
| Not even 2 years behind. |