rubicon33 (4 hours ago):
But actual progress seems to be slower. These models are releasing more often, but they aren't big leaps.
gallerdude (4 hours ago):
We used to get one annual release that was 2x as good; now we get quarterly releases that are each 25% better. So annually, we're now at about 2.44x better (1.25^4).
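A quick sanity check of that compounding arithmetic, as a minimal sketch (the 2x annual and 25% quarterly figures are just the illustrative assumptions from the comment above):

    # Compare one 2x annual jump against four compounded 25% quarterly jumps.
    # Both improvement factors are illustrative assumptions, not measurements.
    annual_factor = 2.0
    quarterly_factor = 1.25

    compounded = quarterly_factor ** 4  # four quarterly releases per year
    print(f"single annual release:   {annual_factor:.2f}x")
    print(f"four quarterly releases: {compounded:.2f}x")  # ~2.44x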
minimaxir (3 hours ago):
Due to the increasing difficulty of scaling up training, the gains instead seem to come from better training techniques, which appears to be working well for everyone.
wahnfrieden (4 hours ago):
GPT 5.3 (/Codex) was a huge leap over 5.2 for coding.