| ▲ | netdevphoenix 3 days ago |
| This looks quite concerning imo. We all know this is a bubble. When the transformer implosion happens, you can be sure that OpenAI will be ground zero. All these investors feeding OpenAI and all these adjacent companies exposing themselves to OpenAI will suffer huge losses. Everyone is chasing growth so hard that they are making questionable choices regarding returns from a far future that may never come. And let's be clear, the future that is going to pay this off is a future where this tech or a direct successor to this tech brings about a level of general learning skills and autonomy that should be pretty close to a third revolution. Anything else is massive loves for all of these companies. |
|
| ▲ | treis 3 days ago | parent | next [-] |
| Nah, this is a repeat of Google's early days. They built storage at such a scale that it was hard for anyone else to compete in anything that required storage, like email. OpenAI is doing the same with compute. They're going to have more compute than everyone else combined. It will give them the scale and war chest to drive everyone else out. Every AI company is going to end up being a wrapper around them. And OpenAI will slowly take that value too, either via acquisition or by cloning successful products. |
| |
| ▲ | mdasen 3 days ago | parent | next [-] | | But is OpenAI building that compute or are they renting it? OpenAI and Anthropic are signing large deals with Google and Amazon for compute resources, but ultimately that means Google and Amazon will own a ton of compute. Is OpenAI paying Amazon's capex just so Amazon can invest and end up owning what OpenAI needs over the long term? For those paying Google, are they giving Google the money it needs to further invest in its TPUs, handing it a huge advantage? | | |
| ▲ | treis 3 days ago | parent [-] | | Practically, it doesn't matter, just as it didn't matter for Google that storage got many orders of magnitude cheaper. By the time training a novel LLM and serving it to a billion users is as trivial as providing 1GB of email storage is today, there will be other moats. They'll have decades of user history and a monetization framework that will be hard to overcome. Google is a viable competitor here. Everyone else is missing part of the puzzle. They theoretically could compete, but they're behind with no obvious way of catching up. Amazon specifically is in a position similar to where they were with mobile. They put out a competing phone, but with no clear advantage it flopped. They could put out their own LLM, but they're late. They'd have to put out a product that is enough better to overcome consumer inertia, and they have no real edge or advantage over OpenAI/Google to make that happen. Theoretically they could back a competitor like Anthropic, but what's the point? They look like an also-ran these days, and ultimately who wins doesn't affect Amazon's core businesses. | | |
| ▲ | bespokedevelopr 3 days ago | parent | next [-] | | FB seems to have finally figured it out, and their stock took a huge hit for the infra investment. Also, despite being behind on SOTA models even after huge human-capital investments in research, I believe they are benefiting greatly from OAI and the like. Every genAI image/video/text post on a Meta app is essentially subsidized by OAI/Gemini/Anthropic, as they are all losing money on inference. Meta is getting more engagement and ad sales through these subsidized genAI image content posts. Long term, they need to catch up, and training/inference costs need to drop enough that each genAI post costs less than the net profit on the ads, but they're in a great position to bridge the gap. The end of all of this is ad sales, and Google and Meta are still the leaders there. OpenAI needs a social engagement platform or it is only going to take a slice of Google. | | |
| ▲ | netdevphoenix 3 days ago | parent [-] | | > Meta is getting more engagement and ad sales through these subsidized genAI image content posts. Do you have any sources backing this? As in, "more engagement and ad sales" relative to what they would get with no genAI content? |
| |
| ▲ | esafak 3 days ago | parent | prev [-] | | How is Anthropic an also-ran when they lead the enterprise market? | | |
| ▲ | dybber 3 days ago | parent [-] | | Do they? Don't big corporations just buy Copilot from Microsoft, where they already have licenses for Office, Teams, GitHub, Visual Studio, Azure, etc.? | | |
|
|
| |
| ▲ | JimDabell 3 days ago | parent | prev | next [-] | | > OpenAI is doing the same with compute. No, it’s Amazon that’s doing this. OpenAI is paying Amazon for the compute services, but it’s Amazon that’s building the capacity. | |
| ▲ | pphysch 3 days ago | parent | prev | next [-] | | Pretty sure this "compute is the new oil" thesis fell flat when OAI failed to deliver on the GPT-5 hype, with all the disappointments since. It's still all about the (yet-to-be-collected) data and advances in architecture, and OAI doesn't have anything substantial there. | | |
| ▲ | XorNot 3 days ago | parent | next [-] | | It's absolutely no longer about the data. We produce millions of new humans a year who wind up better at reasoning than these models but don't need to read the entire contents of the Internet to do it. A relatively localized, limited lived experience apparently conveys a lot that LLM input does not; there's an architecture problem (or a compute constraint). | | |
| ▲ | pphysch 3 days ago | parent [-] | | AI having a societally useful impact is 100% about the data and the overall training process (and robotics...), of which raw compute is a relatively trivial and fungible part. No amount of Reddit posts and H200s will result in a model that can cure cancer, drive high-throughput waste filtering, or do precision agriculture. |
| |
| ▲ | kingstnap 3 days ago | parent | prev [-] | | I think GPT-5 is pretty good. My use case is VS Code Copilot, and the GPT-5 Codex model and the 5 mini model are a lot better than 4.1. o4-mini was pretty good too. It's slow as balls as of late though, so I use a lot of Sonnet 4.5 just because it doesn't involve all this waiting, even though I find Sonnet to be kinda lazy. | | |
| ▲ | pphysch 3 days ago | parent [-] | | Sure, GPT-5 is pretty good. So are a dozen other models. It's nowhere near the "scary good" proto-AGI that Altman was fundraising on prior to its inevitable release. | | |
| ▲ | Libidinalecon 2 days ago | parent [-] | | Even more so, where is the model that is beating GPT-5? If the scaling narratives were holding, this level that supposedly fell flat should have been easy to jump over. |
|
|
| |
| ▲ | kilroy123 3 days ago | parent | prev | next [-] | | Google, too, has a lot of compute. Not to mention the chips to power the compute. | | |
| ▲ | pityJuke 3 days ago | parent [-] | | And they own the compute, as opposed to renting some of it. And they have the engineers to utilise that compute. |
| |
| ▲ | gizajob 3 days ago | parent | prev | next [-] | | If only everyone in the world had compute in their pockets or on their desk… | |
| ▲ | Keyframe 3 days ago | parent | prev | next [-] | | > Every AI company is going to end up being a wrapper around them. The race is for sure on: https://menlovc.com/perspective/2025-mid-year-llm-market-upd... | |
| ▲ | sipjca 3 days ago | parent | prev [-] | | seems like a flawed assumption when the cost of tokens -> 0 |
|
|
| ▲ | lm28469 3 days ago | parent | prev | next [-] |
| Like in politics, all they care about is getting out before SHTF and passing the bag to the next sucker while making $$$ in the meantime. |
|
| ▲ | lizknope 3 days ago | parent | prev | next [-] |
| Are we at the Pets.com stage of the bubble yet? I started working in 1997, during the run-up of the dot-com bubble. I thought it would go on forever, but the second half of 2000 and 2001 were rough. I know a lot of people designing AI accelerator chips. Everyone over 45 thinks we are in an AI bubble; it's the younger people who think growth is infinite. I told them to diversify out of their company stock, but we'll see if they have listened after the bubble pops. |
|
| ▲ | empath75 3 days ago | parent | prev | next [-] |
| You are stating a lot of things as fact that aren't really supported. We don't know this is a bubble, we don't know that there will be a transformer implosion (whatever that means), and we don't know that OpenAI would be ground zero if this is a bubble and it pops, etc. |
| |
| ▲ | indigodaddy 3 days ago | parent [-] | | No one ever knows before these things happen. These predictions are obviously always conjecture; they can't be stated as fact, ever. At best you can give some supporting evidence, often based on similar prior art. |
|
|
| ▲ | indigodaddy 3 days ago | parent | prev [-] |
| "loves" is a typo for "losses", I assume? |
| |