wizzwizz4 | 5 hours ago
From the article:

> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary."

I've been hearing "just wait" since 2023. We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects. Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.
jnovek | 4 hours ago | parent
The rate of hallucination has dropped drastically since 2023. As LLM coding tools continue to pare that rate down, we'll eventually hit a point where it is comparable to the rate at which we human programmers naturally introduce bugs.