▲ mvkel 14 hours ago
The fact is nobody has any idea what OpenAI's cash burn is. Measuring how much they're raising is not an adequate proxy. For all we know, they could be accumulating capital to weather an AI winter. It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real. | |||||||||||||||||||||||
▲ reissbaker 13 hours ago
The GPT-5 series is a new model, based on the o1/o3 series. It's very much inaccurate to say that it's a routing system and prompt chain built on top of 4o: 4o was not a reasoning model, and reasoning prompts are very weak compared to actual RLVR training. No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x.

Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious by their gpt-oss-120b release, which allegedly was trained entirely on data from their synthetic pipelines), which would be a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
| |||||||||||||||||||||||
▲ super256 13 hours ago
I think you are mixing things up here, and I think your comment is based on the article from SemiAnalysis. [1] It said:

> OpenAI's leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google's TPU fleet has managed to overcome.

However, a pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely have still done a lot of fine-tuning, RLHF, alignment, and tool-calling improvements. All of that is training too. And it is totally fine; just look at the great results they got with Codex-high.

If you actually got what you said from a different source, please link it. I would like to read it. If you just mixed things up, that's fine too.

[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
▲ Imustaskforhelp 14 hours ago
Didn't they create Sora and other models? They literally burned so much money on their AI video app, which they wanted to turn into a social network, but what ended up happening was that they burned billions of dollars.
| |||||||||||||||||||||||
▲ computerphage 14 hours ago
Why do you think they have not trained a new model since 4o? You think the GPT-5 release is /just/ routing to differently sized 4o models? | |||||||||||||||||||||||
| |||||||||||||||||||||||
▲ nl 10 hours ago
> It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.

This isn't really accurate.

Firstly, GPT-4.5 was a new training run, and it is unclear how many other failed training runs they did.

Secondly, "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after GPT-4o were all post-trained differently using reinforcement learning. That is a substantial expense.

Finally, it seems like GPT-5.2 is a new training run, or at least the training cutoff date is different. Even if they didn't do a full run, it must have been a very large one.
▲ orbital-decay 13 hours ago
> It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4)

At the very least they made GPT-4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be, but they made a wrong scaling prediction; people simply weren't ready to pay that much money.
| |||||||||||||||||||||||
▲ ajross 13 hours ago
> The fact is nobody has any idea what OpenAI's cash burn is.

Their investors surely do (absent outrageous fraud).

> For all we know, they could be accumulating capital to weather an AI winter.

If they were, their investors would be freaking out (or be complicit in the resulting fraud). This seems unlikely.

In point of fact, it seems like they're playing commodity market-cornering games[1] with their excess cash, which strongly implies that they know how to spend it even if they don't have anything useful to spend it on.

[1] Again, cf. fraud
| |||||||||||||||||||||||
▲ stevenjgarner 9 hours ago
Well, we do know their energy consumption is not insignificant and comes at great cost.
▲ yojo 14 hours ago
They have not successfully trained a new model since 4o. That doesn’t mean they haven’t burned a pile of cash trying. I know sama says they aren’t trying to train new models, but he’s also a known liar and would definitely try to spin systemic failure. | |||||||||||||||||||||||
| |||||||||||||||||||||||
▲ ta9000 14 hours ago
How are they updating the data, then? Wouldn't the cutoff date be getting further and further away from today?
| |||||||||||||||||||||||
▲ micromacrofoot 14 hours ago
they're paying million-dollar salaries to engineers and building data centers; it's not a huge mystery where their expenses are
| |||||||||||||||||||||||
▲ slashdave 13 hours ago
> they could be accumulating capital to weather an AI winter

Doubtful. This would be the very antithesis of the Silicon Valley way.
▲ wahnfrieden 14 hours ago
wasn't 4.5 new?
| |||||||||||||||||||||||