northfield27 | 2 days ago
I think I was right about AI coding tools, but I was probably just early: https://news.ycombinator.com/item?id=46769188#46782672

I’ve been using and experimenting with LLM-based coding tools for ~2 years, mostly in research systems, and I expected both the upside and the limitations to show up eventually.

There was a period, from mid last year to early this year, when the social media hype around AI coding felt very strong. A lot of the discussion was about agents and multi-agent setups replacing large parts of engineering work. My experience didn’t match that level of capability in practice, especially in larger codebases and with long-term maintainability. The gap I kept seeing was between small, impressive demos and actual production constraints: correctness, debugging, security, and system understanding.

I also remember seeing “vibe coding” discussed around that time. My initial reaction was skepticism, because it felt like it abstracted away too much of the engineering process. I might be misremembering details, but even now my view is that these systems are still not very reliable without strong human structure around them.

I don’t really blame individuals for the hype cycle. The incentives on social platforms and in the industry were clearly aligned toward showcasing success cases and productivity gains, so that naturally shaped the narrative.

My current view is not that these tools are useless; they are clearly helpful for many tasks. But the more interesting problem is how to integrate them into real engineering workflows without increasing the maintenance burden or accumulating hidden complexity over time. We probably need better patterns for using them in production settings, and better expectations around what they can and cannot safely automate today.