doug_durham 8 days ago
I wonder though. One of the superpowers of LLMs is code reading. I'd say the tools are better at reading than writing. It is very easy to get comprehensive documentation for any code base and build understanding by asking questions. At that point, does it matter that there is a living developer who understands the code? If an arbitrary person with knowledge of the technology stack can get up to speed quickly, is it important to have the original developers around any more?

gf000 7 days ago
Well, according to the recently linked Naur paper, the mental model of a codebase includes the code that wasn't written just as much as the code that was - e.g. the decision to pick this design over another, etc. This is not recoverable by an AI without every meeting note and interaction between the devs/clients/etc.

throwaway290 8 days ago
I don't think an LLM can generate good docs for code that isn't self-documenting :) Hit any obscure long function you can't figure out yourself and you're out of luck.
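To make it concrete, here's a made-up sketch of the kind of function I mean (hypothetical code, not from any real project). A model can restate the arithmetic, but the 7, the 2 and 5, the 0.97 could be a billing calendar, a shift rota, anything; that knowledge simply isn't in the code:

    # hypothetical example: undocumented domain constants, nothing to self-document
    def adjust(v, t):
        # why modulo 7? why days 2 and 5? the code can't say, and neither can a model
        if t % 7 in (2, 5):
            v *= 0.97
        return round(v + (t // 90) * 0.4, 2)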

closeparen 7 days ago
I'm not looking for documentation as an alternative to reading the code, but because I want to know elements of the programmer's state of mind that didn't make it into the code. Intentions, expectations, assumptions, alternatives considered and not taken, etc. The LLM's best guess at this is no better than mine (so far).
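A contrived sketch of what I mean (hypothetical, invented scenario): the value is trivial to read off, but the comment is the part no model can reconstruct from the code alone.

    # hypothetical example: the constant is obvious, the reasoning behind it isn't
    RETRY_LIMIT = 3  # we tried exponential backoff first; it hid a load-balancer bug
                     # for weeks, so we deliberately keep retries dumb and low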

dhorthy 8 days ago
I spend a lot of time thinking about this. At humanlayer we have some OSS projects that are 99% written by AI, and a lot of that was written under the supervision of developer(s) who are no longer at the company. Every now and then we find that there are gaps in our own understanding of the code/architecture that require getting out the old LSP and spelunking through call stacks. It's pretty rare though.

camgunz 7 days ago
> I'd say the tools are better at reading than writing.

No way, models are much, much better at writing code than at giving you true and correct information. The failure modes are also a lot easier to spot when writing code: it doesn't compile, tests got skipped, it doesn't run right, etc. If Claude Code gives you incorrect information about a system, the only way to verify it is to build a pretty good understanding of that system yourself. And because you've incurred a huge debt here, whoever's building that understanding is going to take much more time to do it.

Until LLMs get way closer (not entirely) to 100%, there's always gonna have to be a human in the loop who understands the code. So, in addition to the above issue, you've now got a tradeoff: do you want that human to be able to manage multiple code bases but have to come up to speed on a specific one whenever intervention is necessary, or do you want them to be able to intervene quickly but only in one code base?

More broadly, you've also now got a human resource problem. Software engineering is pretty different from monitoring LLMs: most people get into it because they like writing code. You need software experts in the loop, but when the LLMs take the "fun" part for themselves, most SWEs are no longer interested. Thus, you're left with a small subset of an already pretty small group.

Apologists will point out that LLMs are a lot better in strongly typed languages, in code bases with lots of tests, and when using language servers, MCP, etc. for their actions. You can imagine more investment and tech here. The downside is that models have to work much, much harder in this environment, and you still need a software expert because the failure modes are far more obscure now that your process has obviated the simple stuff. You've solved the "slop" problem, but now you've got a "we have to spend a lot more money on LLMs and a lot more money on a rare type of expert to monitor them" problem.

---

I think what's gonna happen is a division of workflows. The LLM workflows will be cheap and shabby: they'll be black boxes, you'll have to pull the lever over and over again until it does what you want, you'll build no personal skills (because lever pulling isn't a skill), practically all of your revenue--and your most profitable ideas--will go to your rapacious underlying service providers, and you'll have no recourse when anything bad happens. The good workflows will be bespoke and way more expensive. They'll almost always work, there will be SLAs for when they don't, you'll have (at least some) rights when you use them, they'll empower and enrich you, and you'll have a human to talk to about any of it at reasonable times.

I think the jury's out on whether or not this is bad. I'm sympathetic to the "an LLM brain may be better than no brain" argument, but that's hugely contingent on how expensive LLMs actually end up being and on any deleterious effects of outsourcing core human cognition to LLMs.