| ▲ | bwestergard 8 days ago |
| There are always two major results from any software development process: a change in the code and a change in the cognition of the people who wrote the code (whether they did so directly or with an LLM). Python and TypeScript are elaborate formal languages that emerged from a lengthy process of development involving thousands of people around the world over many years. They are non-trivially different, and it's neat that we can port a library from one to the other quasi-automatically. The difficulty, from an economic perspective, is that the "agent" workflow dramatically alters the cognitive demands of the initial development process. It is plain to see that the developers who prompted an LLM to generate this library will not have the same familiarity with the resulting code that they would have, had they written it directly. For some economic purposes, this altering of cognitive effort, and the dramatic diminution of its duration, probably doesn't matter. But my hunch is that most of the economic value of code is contingent on there being a set of human beings familiar with the code in a manner that requires having written it directly. Denial of this basic reality was an economic problem even before LLMs: how often did churn in a development team result in a codebase that no one could maintain, undermining the long-term prospects of a firm? |
|
| ▲ | tikhonj 8 days ago | parent | next [-] |
| There's a classic Peter Naur paper about this from 1985: "Programming as Theory Building" https://pages.cs.wisc.edu/~remzi/Naur.pdf |
| |
|
| ▲ | AdieuToLogic 8 days ago | parent | prev | next [-] |
| > But my hunch is that most of the economic value of code is contingent on there being a set of human beings familiar with the code in a manner that requires having written it directly. This reminds me of a software engineering axiom:
    When making software, remember that it is a snapshot of your
    understanding of the problem. It conveys to all, including your
    future self, your approach and the clarity and appropriateness
    of the solution for the problem at hand.
|
| |
| ▲ | wiz21c 7 days ago | parent [-] | | Yes! But there's code and then there's code. Not to disrespect anyone, but there is writing a new algorithm, say for optimizing gradient descent, and there is writing code to display a simple web form. The first is usually short and requires a very deep understanding of one or two profound, new ideas. The second is usually very big and requires a shallow understanding of many not-so-new ideas (which are usually a reflection of the organisation that produced the code). My feeling is that, given a sufficiently long context window, an LLM will be able to work through the second kind of project very easily. It will also be very good at showing that the first kind of project is not so new after all, undercutting all the people who can't come up with really new ideas. In either case, it will pressure institutions to employ fewer IT specialists... As someone who trained specifically in computer science, I'm a bit scared :-/ | | |
| ▲ | dimitri-vs 7 days ago | parent | next [-] | | As someone who has used coding agents extensively for the past year, the problem is they "move fast and break things" a little too well. It turns out that the act of writing code makes you think through your requirements carefully and understand the full scope of the problem you are trying to solve. It has created a new problem: it's a little too easy to ask the AI agent to refactor your backend or migrate to a different platform at any time and have it wipe out months of hard-won business logic that it deems "obsolete". | |
| ▲ | hdjdbdvsjsbs 7 days ago | parent | prev | next [-] | | Remember that before computers became machines, they were people!! This will just open up new frontiers ... You just need to find them ... | |
| ▲ | AdieuToLogic 7 days ago | parent | prev [-] | | > My feeling is that, given a sufficiently long context window, an LLM will be able to work through the second kind of project very easily. It will also be very good at showing that the first kind of project is not so new after all, undercutting all the people who can't come up with really new ideas. My perspective is that the value lies in understanding what a system needs to do and why, in order to satisfy a defined need, be it algorithmic and/or business. If the need is a use case involving a web form, an LLM can no more replace the knowledge of why it is there than someone fulfilling a "Fiverr contract" could. Both might be able to complete a specific deliverable, but neither has the ability to provide value to an organization beyond the assets they produce. |
|
|
|
| ▲ | doug_durham 8 days ago | parent | prev | next [-] |
| I wonder, though. One of the superpowers of LLMs is code reading. I say the tools are better at reading than writing. It is very easy to get comprehensive documentation for any code base and gain understanding by asking questions. At that point, does it matter that there is a living developer who understands the code? If an arbitrary person with knowledge of the technology stack can get up to speed quickly, is it important to have the original developers around any more? |
| |
| ▲ | gf000 7 days ago | parent | next [-] | | Well, according to the recently linked Naur paper, the mental model for a codebase includes the code that wasn't written just as much as the code that was - e.g. the decision to choose this design over another, etc. This is not recoverable by AI without every meeting note and interaction between the devs/clients/etc. | | |
| ▲ | lordnacho 7 days ago | parent [-] | | Not for an old project, but if you've talked an AI through building something, you've also told it "nah, let's not change the interface" and similar decisions, which will sit in the context. | | |
| ▲ | closeparen 7 days ago | parent [-] | | The transcripts of the LLM interactions that generated code changes are not normally checked in with the code. Perhaps they should be! |
|
| |
| ▲ | throwaway290 8 days ago | parent | prev | next [-] | | I don't think an LLM can generate good docs for code that isn't self-documenting :) Any obscure long function you can't figure out yourself, and you're out of luck. | | |
| ▲ | seba_dos1 7 days ago | parent [-] | | Yeah, when I see all those hyped people, I keep wondering: have they not spent enough time with LLMs to notice that yet, or is what they work on just so trivial that it doesn't matter? |
| |
| ▲ | closeparen 7 days ago | parent | prev | next [-] | | I'm not looking for documentation as an alternative to reading the code, but because I want to know elements of the programmer's state of mind that didn't make it into the code: intentions, expectations, assumptions, alternatives considered and not taken, etc. The LLM's best guess at this is no better than mine (so far). | |
| ▲ | dhorthy 8 days ago | parent | prev | next [-] | | I spend a lot of time thinking about this. At HumanLayer we have some OSS projects that are 99% written by AI, and a lot of it was written by AI under the supervision of developer(s) who are no longer at the company. Every now and then we find that there are gaps in our own understanding of the code/architecture that require getting out the old LSP and spelunking through call stacks. It's pretty rare though. | |
| ▲ | mcny 7 days ago | parent [-] | | > It's pretty rare though. It will only get more common with time. | | |
| |
| ▲ | camgunz 7 days ago | parent | prev [-] | | > I say the tools are better at reading than writing. No way, models are much, much better at writing code than giving you true and correct information. The failure modes are also a lot easier to spot when writing code: it doesn't compile, tests got skipped, it doesn't run right, etc. If Claude Code gave you incorrect information about a system, the only way to verify it is to build a pretty good understanding of that system yourself. And because you've incurred a huge debt here, whoever's building that understanding is going to take much more time to do it. Until LLMs get way closer (not entirely) to 100%, there's always gonna have to be a human in the loop who understands the code. So, in addition to the above issue, you've now got a tradeoff: do you want that human to be able to manage multiple code bases but have to come up to speed on a specific one whenever intervention is necessary, or do you want them to be able to quickly intervene but only in one code base? More broadly, you've also now got a human resource problem. Software engineering is pretty different from monitoring LLMs: most people get into it because they like writing code. You need software experts in the loop, but when the LLMs take the "fun" part for themselves, most SWEs are no longer interested. Thus, you're left with a small subset of an already pretty small group. Apologists will point out that LLMs are a lot better in strongly typed languages, in code bases with lots of tests, and using language servers, MCP, etc., for their actions. You can imagine more investments and tech here. The downside is that models have to work much, much harder in this environment, and you still need a software expert because the failure modes are far more obscure now that your process has obviated the simple stuff. You've solved the "slop" problem, but now you've got a "we have to spend a lot more money on LLMs and a lot more money on a rare type of expert to monitor them" problem. --- I think what's gonna happen is a division of workflows. The LLM workflows will be cheap and shabby: they'll be black boxes, you'll have to pull the lever over and over again until it does what you want, you'll build no personal skills (because lever pulling isn't a skill), practically all of your revenue--and your most profitable ideas--will go to your rapacious underlying service providers, and you'll have no recourse when anything bad happens. The good workflows will be bespoke and way more expensive. They'll almost always work, there will be SLAs for when they don't, you'll have (at least some) rights when you use them, they'll empower and enrich you, and you'll have a human to talk to about any of it at reasonable times. I think the jury's out on whether or not this is bad. I'm sympathetic to the idea that "an LLM brain may be better than no brain", but that's hugely contingent on how expensive LLMs actually end up being and on any deleterious effects of outsourcing core human cognition to LLMs. |
|
|
| ▲ | divan 7 days ago | parent | prev | next [-] |
| I used the "map is not a territory" to describe this context in the article about visual programming [0]. Code is a map, territory is the mental model of the problem domain the code is supposed to be solving. But, as other commentators mentioned, LLMs are so much better on reading large codebases, that it even invalidates the whole idea of this post (visualizing codebase in 3D in a fashion similar how I would do it in my head). Which kinda changes the game – if "comprehending" complex codebase becomes an easy task, maybe we won't need to keep developers' mental models and the code in constant sync. (it's an open question) [0] https://divan.dev/posts/visual_programming_go/ |
|
| ▲ | kissgyorgy 7 days ago | parent | prev | next [-] |
| It's so much easier to build a mental model of a code base with LLMs. You just ask specific questions about a subsystem and they show you files and code snippets, point out the idea, etc. I just recently took the time to understand how exactly the GIL works in CPython: I asked a couple of questions about it, and Claude showed me the relevant API and where to find examples of it. I looked it up in the CPython codebase and all of a sudden it clicked. The huge difference was that it cost me MINUTES. I didn't even bother to dig in before, because I can't perfectly read C, the CPython codebase is huge, and it would have taken me a really long time to understand everything. |
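| To give a flavor of the API in question, here is a rough sketch of the public GIL calls as they appear in a CPython C extension (illustrative only, not the actual CPython source: the function names blocking_work and callback_from_c_thread are made up for the example, while Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS and the PyGILState_* calls are the standard C API):
    #include <Python.h>

    /* Release the GIL around long-running C work that doesn't touch Python objects. */
    static PyObject *
    blocking_work(PyObject *self, PyObject *args)
    {
        Py_BEGIN_ALLOW_THREADS   /* saves the current thread state and drops the GIL */
        /* ... blocking C code here, no Python C API calls allowed ... */
        Py_END_ALLOW_THREADS     /* re-acquires the GIL and restores the thread state */
        Py_RETURN_NONE;
    }

    /* A callback arriving on a non-Python thread must acquire the GIL first. */
    static void
    callback_from_c_thread(void)
    {
        PyGILState_STATE gstate = PyGILState_Ensure();  /* take the GIL */
        /* ... safe to use the Python C API here ... */
        PyGILState_Release(gstate);                     /* give it back */
    }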
|
| ▲ | diggan 7 days ago | parent | prev | next [-] |
| > It is plain to see that the developers who prompted an LLM to generate this library will not have the same familiarity with the resulting code that they would have, had they written it directly I think that's a bit too simplistic. Yes, a person just blindly accepting whatever the LLM generates from their unclear prompts probably won't have much understanding of or familiarity with it. But that's not how I personally use LLMs, and I'm sure the same is true for a lot of others. Instead, I'm the designer/architect, with strict control over exactly what I want. I may not actually have written the lines, but all the interfaces/APIs are human-designed, the overall design/architecture is human-designed, and since I designed it, I know enough to say I'm familiar with it. And if I come back to the project in 1-2 years, even if there is no documentation, it's trivial to spend 10-20 minutes together with an LLM to understand the codebase from every angle: just ask pointed questions and you can rebuild your mental image quickly. TLDR: Not everyone is using LLMs for "vibe-coding" (blind coding); many use them as an assistant sitting next to them. So my guess is that the ones who know what you need to know in order to effectively build software will be a lot more productive. The ones who don't know that (yet?) will drown in spaghetti faster than before. |
|