roadside_picnic 3 hours ago
I used to be really excited about "agents" when I thought people were trying to build actual agents of the kind the CS field has been working on for decades. It's clear now that "agents" in the context of "AI" is really about answering the question "How can we get users to make 10x more calls to our models in a way that doesn't feel like we're just squeezing money out of them?" I've seen so many people who think setting some "agents" off on a minutes-to-hours-long task of basically driving up internal KPIs at LLM providers is cutting-edge work.

The problem is, I haven't seen any evidence at all that spending 10x the number of API calls on an agent produces anything closer to useful than last year, when people were purely vibe coding all the time. At least then people would interactively learn about the slop they were building. It's astounding to watch a coworker walk through a PR with hundreds of newly added files and repeatedly say "I'm not sure if these actually work, but it does look like there's something here."

Now I'm sure I'll get some fantastic "no true Scotsman" replies about how my coworkers must not be skilled enough, or how they need to follow xyz pattern, but the entire point of AI was to remove the need for specialized skills and make everyone 10x more productive. Not to mention that the shift in focus to "agents" is also useful for distracting from the clearly diminishing returns on foundation models.

I just hope there are enough people who still remember how to code (and, in some cases, think) to rebuild when this house of cards falls apart.