imiric 8 hours ago
The idea that this technology isn't useful is as ignorant as thinking that there is no "AI" bubble. Of course there is a bubble. We can see it whenever these companies tell us this tech is going to cure diseases, end world hunger, and bring global prosperity; whenever they tell us it's "thinking", can "learn skills", or is "intelligent", for that matter. These companies' valuations will absolutely fall, and the market will crash, when the public stops buying the snake oil it's being sold.

But at the same time, a probabilistic pattern recognition and generation model can indeed be very useful in many industries. Many of our problems can be approached by framing them in terms of statistics and throwing data and compute at them.

So now that we've established that, and we're reaching diminishing returns from scaling up, the only logical path forward is classical engineering work, which has been neglected for the past 5+ years. This is why we're seeing the bulk of gains from things like MCP and, now, "agents".
NitpickLawyer 7 hours ago | parent
> This is why we're seeing the bulk of gains from things like MCP and, now, "agents".

This is objectively not true. The models themselves have improved a ton (trained with data from "tools" and "agentic loops", but it's still the models that have become more capable). Check out [1], a ~100 LoC "LLM in a loop with just terminal access"; it now scores above last year's heavily harnessed SotA:

> Gemini 3 Pro reaches 74% on SWE-bench verified with mini-swe-agent!
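For readers unfamiliar with the pattern being described: an "LLM in a loop with terminal access" can be sketched in a few lines. This is a generic, hypothetical illustration of the idea, not the actual mini-swe-agent code; the `model` callable stands in for a real LLM API call, and the stop convention (`DONE ...`) is invented for the example.

```python
import subprocess

def agent_loop(model, task, max_steps=10):
    """Minimal 'LLM in a loop with terminal access' sketch: show the model
    the task plus the transcript so far, run the shell command it proposes,
    append the command's output, and repeat until the model signals DONE."""
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        action = model("\n".join(transcript))
        if action.startswith("DONE"):
            # Model decided it is finished; return its final answer.
            return action[len("DONE"):].strip()
        # Execute the proposed command and capture stdout/stderr for context.
        result = subprocess.run(action, shell=True,
                                capture_output=True, text=True)
        transcript.append(f"$ {action}\n{result.stdout}{result.stderr}")
    return None  # gave up after max_steps

# Stub "model" standing in for a real LLM: runs one command, then finishes.
def stub_model(transcript):
    if "$ echo" not in transcript:
        return "echo hello"
    return "DONE hello"
```

The point of the thread's benchmark claim is that almost all of the harness above is trivial; the capability lives in whatever implements `model`.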
| ||||||||||||||||||||||||||