advael | 5 hours ago
I feel vindicated by this article, but I shouldn't. I have to admit I never developed the optimism to do this for two years, though I've increasingly tried to view that as a personal failing of closed-mindedness, as more and more commentators and colleagues came around to "vibe-coding" with each "next big thing" that dropped.

The deepest I've dived in was this past week. I wrangled some resources to build a completely self-hosted, agentic workflow, tried several open-weight models that people around me had specifically recommended, and picked a work project that was self-contained and small enough to do from scratch. There were a few moving pieces, but within a few iterations the models gave me what looked like a working solution, and I was duly impressed, until I realized it wasn't quite working as expected.

As I reviewed and iterated with the agents, this Rube Goldberg machine started filling in gaps with print statements designed to trick me, and with sneaky block comments admitting, in oblique terms three lines into a boring description of the expected output, that the code was a placeholder not meant for production. This should have been obvious, but even four days in I found myself missing more and more, because I wasn't writing the code and didn't understand it. This is basically the automation blindness I feared from proprietary workflows that could be changed or taken away at any time, only it set in much faster than I had assumed. The promise of being able to work through it at this higher level, this new way of working, seemed less and less plausible the more I iterated. Even starting over with chunks of the problem in fresh contexts, as many suggest, didn't really help.
I had deadlines, so I gave up and spent about half my weekend fixing it by hand. It was incredibly satisfying when it worked, but all-in it took more time and effort, and perhaps more importantly caused more stress, than just writing it in the first place probably would have.

My background is in ML research, which perhaps makes it easier to predict the failure modes of these things (though surprisingly many researchers don't seem to), and it also makes me want to be optimistic, to believe this can work. But I've also done a lot of work as a software engineer, and my intuition remains that doing precision knowledge work of any kind at scale with a generative model is A Very Suspect Idea, one that comes more from the dreams of the wealthy executive class than from any real grounding in what generative models are capable of and how they're best employed.

I do remain optimistic that LLMs will keep finding use cases that fit a niche of state-of-the-art natural language processing that is nonetheless probabilistic in nature. Many such use cases exist. Taking human job descriptions and pretending the models can do them entirely seems like a poorly-thought-out one, and to my mind we've poured enough money and effort into it to say that it at the very least needs radically new breakthroughs to stand a chance of working as (optimistically) advertised.