pphysch 3 days ago
Pretty sure this "compute is the new oil" thesis fell flat when OAI failed to deliver on the GPT-5 hype, and with all the disappointments since. It's still all about the (yet to be collected) data and advancements in architecture, and OAI doesn't have anything substantial there.
XorNot 3 days ago
It's absolutely no longer about the data. We produce millions of new humans a year who wind up better at reasoning than these models, but who don't need to read the entire contents of the Internet to do it. A relatively localized, limited lived experience apparently conveys a lot that LLM input does not - there's an architecture problem (or a compute constraint).

kingstnap 3 days ago
I think GPT-5 is pretty good. My use case is VS Code Copilot, and the GPT-5 Codex model and the GPT-5 mini model are a lot better than 4.1. o4-mini was pretty good too. It's slow as balls as of late, though, so I use a lot of Sonnet 4.5 just because it doesn't involve all this waiting, even though I find Sonnet to be kinda lazy.