|
caconym_ 2 days ago

Releasing anything as "GPT-6" which doesn't provide a generational leap in performance would be a PR nightmare for them, especially after the underwhelming release of GPT-5. I don't think it really matters what's under the hood. People expect model "versions" to be indexed on performance.
|
ACCount37 2 days ago

Not necessarily. GPT-4.5 was a new pretrain on top of a sizeable raw model scale bump, and it only got a 0.5 version bump, because the gains from reasoning training in the o-series overshadowed GPT-4.5's natural advantage over GPT-4. OpenAI might have learned not to overhype. They already shipped GPT-5, which was only an incremental upgrade over o3 and was received poorly, with this being part of the reason why.
diego_sandoval 2 days ago

I jumped straight from 4o (free user) to GPT-5 (paid user). It was a generational leap if there ever was one. Much bigger than 3.5 to 4.
ACCount37 a day ago

Yes, if OpenAI had released GPT-5 right after GPT-4o, it would have been seen as a proper generational leap. But o3 already existing, and being good at what it does, took the wind out of GPT-5's sails.
kadushka a day ago

What kind of improvements do you expect when going from 5 straight to 6?
|
hannesfur 2 days ago

Maybe they felt the increase in capability doesn't merit a bigger version bump. Additionally, pre-training isn't as important as it used to be; most of the advances we see now probably come from the RL stage.
|
femiagbabiaka 2 days ago

Not if they felt it didn't deliver customer value, no? It's about under-promising and over-delivering, in every instance.
|
jumploops 2 days ago

It’s possible they’re using some new architecture to get more up-to-date data, but I think that’d be even more of a headline. My hunch is that this is the same 5.1 post-training on a new pretrained base, likely rushed out the door faster than they initially planned.
|
OrangeMusic a day ago

Yeah, because OpenAI has been great at naming their models so far? ;)
|
boc 2 days ago

Maybe the rumors about failed training runs weren't wrong...
|
redwood 2 days ago

Not if it underwhelms.