▲ candiddevmike 2 days ago
Skeptic here: I do think LLMs are a fad for software development. They're an interesting phenomenon that people have convinced themselves MUST BE USEFUL in the context of software development, either through ignorance or a sense of desperation. I do not believe LLMs will be used long term for any kind of serious software development use case, as the maintenance cost of the code they produce will run development teams into bankruptcy.

I also believe the current generation of LLMs (transformers) is a technical dead end on the path to real AGI, and the more time we spend hyping them, the less research/money gets spent on discovering new/better paths beyond transformers.

I wish we could go back to complaining about Kubernetes, focusing on scaling distributed systems, and solving more interesting problems than comparing winnings on a stochastic slot machine. I wish our industry were held to a higher standard than rushing out bug-ridden MVP code as quickly as possible.
▲ jodrellblank a day ago
Here[1] is a recent submission from Simon Willison using GPT-5.2 to port a Python HTML-parsing library to JavaScript in 4.5 hours. The code passes the 9,200 test cases of html5lib-tests used by web browsers. That's a workable, usable, standards-compliant (as much as the test cases are) HTML parser in <5 hours. For <$30. While he went shopping and watched TV.

The Python library it was ported from was also mostly vibe-coded[2] against the same test cases, with the LLM referencing a Rust parser.

Almost no human could port 3,000 lines of Python to JavaScript and test it in their spare time while watching TV and decorating a Christmas tree. Almost no human you could employ would do a good job of it for $6/hour and have it done in 5 hours. How is that "ignorance or a sense of desperation" and "not actually useful"?
▲ skydhash 2 days ago
Another skeptic here: I strongly believe that creating new software has always been easy. The real struggle is maintaining it, especially for more than one or two years. To this day, I've not seen any argument, or even a hint of reflection, on how we're going to maintain all this code that LLMs are going to generate. Even for prototyping, using wireframing software would be faster.
▲ AYBABTME 2 days ago
In December of 2025, I find it untenable for anyone to hold this position unless they have not yet given LLMs a serious try. They're undeniably useful in software development, particularly on tasks that are amenable to structured software development methodologies.

I've fixed countless bugs in a tiny fraction of the time, entirely accelerated by the use of LLM agents. I get the most reliable results by simply making LLMs follow the "red test, green test" approach, where the LLM first creates a reproducer from a natural-language explanation of the problem, and then cooks up a fix. This works extremely well and reliably produces high-quality results.
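A minimal sketch of that red/green shape, in Python with a pytest-style test; the slugify bug, the function names, and the test are all hypothetical, invented purely for illustration:

    # Step 1, "red": turn the natural-language bug report -- "slugify()
    # drops accented characters instead of transliterating them" -- into
    # a reproducer that fails against the current implementation.
    import unicodedata

    def slugify(title: str) -> str:
        # Buggy original: silently discards anything non-ASCII.
        return "".join(c if c.isalnum() else "-"
                       for c in title if ord(c) < 128).strip("-").lower()

    def test_slugify_transliterates_accents():
        assert slugify("Café au lait") == "cafe-au-lait"  # red: fails

    # Step 2, "green": only once the reproducer fails is a fix written,
    # e.g. decomposing to NFKD and dropping the combining marks.
    def slugify_fixed(title: str) -> str:
        ascii_only = (unicodedata.normalize("NFKD", title)
                      .encode("ascii", "ignore").decode("ascii"))
        return "-".join(ascii_only.lower().split())

The value of the ordering is that the fix gets verified against a test the model had to make fail first, rather than a test written afterwards to match the fix.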
▲ Aurornis 2 days ago
> They're an interesting phenomenon that people have convinced themselves MUST BE USEFUL in the context of software development,

Reading these comments during this period of history is interesting, because a lot of us actually have found ways to make them useful, acknowledging that they're not perfect. It's surreal to read claims from people who insist we're just deluding ourselves, despite seeing the results.

Yeah, they're not perfect, and they're not AGI writing the code for us. In my opinion they're most useful in the hands of experienced developers, not juniors or PMs vibe coding. But claiming we're all just delusional about their utility is strange to see.
▲ libraryofbabel 2 days ago
Thanks for articulating this position. I disagree with it, but it is similar to the position I held in late 2024. But as antirez says in TFA, things changed in 2025, and so I changed my mind ("the facts change, I change my opinions"...). LLMs and coding agents got very good about 6 months ago, and myself and a lot of other seasoned engineers I respect finally started using them seriously. For what it's worth:

* I agree with you that LLMs probably aren't a path to AGI.

* I would add that I think we're in a big investment bubble that is going to pop, which will create a huge mess and perhaps a recession.

* I am very concerned about the effects of LLMs on wider society.

* I'm sad about the reduced prospects for talented new CS grads and other entry-level engineers in this world, although sometimes AI is just used as an excuse to paper over macroeconomic reasons for not hiring, like the end of ZIRP.

* I even agree with you that LLMs will lead to some maintenance nightmares in the industry. They amplify engineers' ability to produce code, and there are a lot of bad engineers out there, as we all know: plenty of cowboys/cowgirls who will ship as much slop as they can get away with. They shipped unmaintainable messes before; they will ship three times as much now. I think we need to be very careful.

But if you are an experienced engineer who is willing to be disciplined and careful with your AI tools, they can absolutely be a benefit to your workflow. It's not easy: you have to move up and down a ladder of how much you rely on the tool, from true vibe coding for throwaway, use-once helper scripts for some dev or admin task with a verifiable answer, all the way up to hand-crafting critical business logic and only using the agent to review it and to try to break your implementation.

You may still be right that they will create a lot of problems for the industry. I think the ideal situation for using AI coding agents is a small startup where all the devs are top-notch, have many years of experience, care about their craft, and hold each other to a high standard. Very, very few workplaces are like that. But some are, and they will reap big benefits. Other places may indeed drown in slop, if they have a critical mass of bad engineers hammering on the AI button and no guard-rails to stop them.

This topic arouses strong reactions: in another thread, someone accused me of "magical thinking" and "AI-induced psychosis" for claiming precisely what TFA says in its first paragraph: that the LLMs of 2025 aren't the stochastic parrots of 2023. And I thought I held a pretty middle-of-the-road position on all this: I detest AI hype and I try to acknowledge the downsides as well as the benefits. I think we all need to move past the hype and the dug-in AI hate and take these tools seriously, so we can identify the serious questions amidst the noise.
▲ Xenoamorphous 2 days ago
> Skeptic here: I do think LLMs are a fad for software development.

I think that's where they're most useful, for multiple reasons:

- Programming is very formal. Either the thing compiles, or it doesn't, so it's straightforward to provide some "reinforcement" learning based on that (a sketch of such a check follows below).

- There's a shitload of readily available training data.

- There's a big economic incentive: software developers are expensive.
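A minimal sketch of the kind of mechanical pass/fail check that first point relies on, assuming the model's output is plain Python source (the compiles helper is hypothetical, for illustration only, not any lab's actual training harness):

    def compiles(source: str) -> tuple[bool, str]:
        """Return (ok, error) for a chunk of generated Python source."""
        try:
            # Either the generated code byte-compiles or it doesn't --
            # no human judge is needed to score the attempt.
            compile(source, "<generated>", "exec")
            return True, ""
        except SyntaxError as exc:
            # The error text can be fed straight back to the model as
            # the corrective signal.
            return False, f"line {exc.lineno}: {exc.msg}"

Anything that yields an automatic verdict like this (compilers, type checkers, test suites) gives the feedback loop a cheap, objective signal, which is exactly what most other domains lack.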