| ▲ | dimitri-vs 5 hours ago |
Hallucinations are not solved, memory is not solved, prompt injection is not solved, context limits are waaay too low while tokens are way too expensive to take advantage of those limits, etc. These problems have existed since the very early days of GPT-4, and there is no clear path to them being solved any time soon. You basically need AGI, and we are nowhere close to AGI.
| ▲ | aspenmartin 2 hours ago | parent |
All of the issues you talk about are real. I don't personally care about AGI; it's kind of a mishmash of a real thing and a nice package for investors. What I do care about is what has actually been released and what it can do.

None of those issues are solved, but we've made amazing progress on all of them. Continual learning is a big one, and labs are likely close to some POCs. Token cost per unit of performance keeps dropping rapidly: GPT-4-level performance costs roughly 10x less today than it did two years ago, and that will continue to be the case as we keep pushing efficiency up.

As for the AGI question ("are we close?"): honestly, to me these questions are rabbit holes and flame-war bait, because no one can agree on what AGI means, and even if you settle on a definition (superhuman performance on all economically valuable tasks is maybe a more solid starting point), everyone fights over the ecological validity of the evals.

All I'm saying is: taking coding in a complete vacuum, we're very, very close to the point where the benefit is so obvious, and failure rates on many tasks fall so far below the critical thresholds, that automating even the things people say make engineers unique (working with people to navigate ambiguous issues they can't articulate well, making the right tradeoffs, etc.) starts looking less like a research challenge and more like an exercise in deployment.
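The "10x less" claim is easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch, assuming OpenAI's published list prices as I recall them (the specific models and figures are my assumption for illustration, not something the commenter cited):

    # Back-of-the-envelope check of the "GPT-4-level perf costs ~10x less"
    # claim, using OpenAI list prices (USD per 1M tokens) as recalled.
    # Treat the exact figures as approximate assumptions.
    GPT4_LAUNCH = {"input": 30.00, "output": 60.00}  # gpt-4, March 2023

    successors = {
        "gpt-4o (May 2024)":      {"input": 5.00, "output": 15.00},
        "gpt-4o-mini (Jul 2024)": {"input": 0.15, "output": 0.60},
    }

    for model, price in successors.items():
        cheaper_in = GPT4_LAUNCH["input"] / price["input"]
        cheaper_out = GPT4_LAUNCH["output"] / price["output"]
        print(f"{model}: ~{cheaper_in:.0f}x cheaper input, "
              f"~{cheaper_out:.0f}x cheaper output")

This prints roughly 6x/4x cheaper for gpt-4o and roughly 200x/100x cheaper for gpt-4o-mini. If you accept gpt-4o-mini as arguably GPT-4-level on many tasks, the 10x figure is conservative; even the full gpt-4o was already several times cheaper within about a year.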