dboreham 6 days ago
Yes, but in my experience actually no. At least not with the bleeding-edge models today. I've been able to get LLMs to write whole features, to the point that I'm quite surprised at the result. Perhaps I'm talking to it right (the new "holding it right"?). I tend to begin by asking for an empty application with the characteristics I want (CLI, has subcommands, ...), then I ask it to add a simple feature. I get that working, then ask it to enhance functionality progressively, testing as we go. Then, when the functionality is working, I ask for a refactor (it often puts 1500 LOC in one file, for example), documentation, improved help text, and so on. Basically the same way you'd manage a human.

I've also been close to astonished at the capability LLMs have to draw conclusions from very large, complex codebases. For example, I wanted to understand the details of a distributed replication mechanism in an enormous project. Pre-LLM, I'd have spent a couple of days crawling through the code using grep and perhaps IDE tools, making notes on paper. I'd probably have had to run the code or instrument it with logging, then look at the results in a test deployment. But I've found I can ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. I can ask it whether all reachable peers are known at all nodes. It's almost better than me at this, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort. While it's chugging I can think about higher-order things.

I say all this as a massive AI skeptic dating back to the 1980s.
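To make the starting point concrete: a minimal sketch of the kind of "empty application with subcommands" scaffold that opening prompt might produce, using Python's standard-library argparse. All names (`mytool`, the `greet` subcommand) are hypothetical, purely illustrative of the first small feature you'd then grow incrementally.

```python
# Hypothetical scaffold: a CLI with subcommands, plus one trivial
# first feature ('greet'), ready to be extended one step at a time.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="mytool", description="Example CLI scaffold"
    )
    sub = parser.add_subparsers(dest="command", required=True)

    # First simple feature: a 'greet' subcommand with one argument.
    greet = sub.add_parser("greet", help="Print a greeting")
    greet.add_argument("name")

    return parser


def main(argv=None) -> int:
    args = build_parser().parse_args(argv)
    if args.command == "greet":
        print(f"Hello, {args.name}!")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```

From here, "enhance functionality progressively" just means adding one `sub.add_parser(...)` and its handler per iteration, with a test after each.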
manoDev 6 days ago
> I tend to begin asking for an empty application with the characteristics I want (CLI, has subcommands, ...) then I ask it to add a simple feature.

That makes sense, as you're breaking the task into smaller, achievable tasks. But it takes an already experienced developer to think like this. Instead, a lot of people on the hype train are pretending an AI can take an idea to production from a "CEO level" of detail – that probably ain't happening.
svachalek 6 days ago
All the hype is about asking an LLM to start with an empty project and loose requirements. Asking it to work on a million lines of legacy code (inadequately tested, as all legacy code is) with ancient and complex contracts is a completely different experience.