| ▲ | mamp 20 hours ago |
| Strange article. The problem isn’t that no single person knows how everything works; it’s that AI coding could mean there is no one who knows how a given system works at all. |
|
| ▲ | lynguist 18 hours ago | parent | next [-] |
| No, I think the problem is that AI coding removes intentionality. That introduces artifacts, connections, and dependencies that wouldn’t be there if someone had designed the system with intent, and it eventually makes the system harder to reason about. There is a qualitative difference between "it happens to work" and "it was made for a purpose." Business logic will tend to settle for "it happens to work" as good enough. |
| |
| ▲ | satisfice 13 hours ago | parent | next [-] | | The core problem is irresponsibility. Things that happen to work may stop working, or be revealed to have terrible flaws. Who is responsible? What is their duty of care? | |
| ▲ | stoneforger 17 hours ago | parent | prev [-] | | Excellent point. The intention of business is profit; how it arrives there is considered incidental. Any product will do, no matter what, as long as it sells. Compounding effects in computing, the internet, and miniaturisation have enabled large profit margins that reinforce this further. Businesses treat it as a machine that can keep printing more money and subsuming more and more, since software and computers are pervasive. |
|
|
| ▲ | Animats 19 hours ago | parent | prev | next [-] |
| Including the AI, which generated it once and forgot. This is going to be a big problem. How do people using Claude-like code generation systems handle this? What artifacts other than the generated code are left behind for reuse when modifications are needed? Comments in the code? The entire history of the inputs and outputs to the LLM? Is there any record of the design? |
| |
| ▲ | maxbond 18 hours ago | parent | next [-] | | I have experimented with telling Claude Code to keep a historical record of the work it is performing. It did work (though I didn't assess the accuracy of the record), but I decided it was a waste of tokens and now direct it to analyze the history in ~/.claude when necessary. The real problem I was solving was making sure it didn't leave work unfinished between autocompacts (e.g., crucial parts of the work weren't performed and there were only TODO comments instead). But I ended up solving that with better instructions about how to break the plan down into bite-sized units that are more friendly to the todo list tool. I have prompting in AGENTS.md that instructs the agent to update the relevant parts of the project documentation for a given change. The project has a spec, and as features get added or reworked the spec gets updated. If you commit after each session, then the git history of the spec captures how the design evolves. I do read the spec, and the errors I've seen so far are pretty minor. | |
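Since the spec is committed along with each session, its design history can be replayed straight from git. A minimal sketch of doing that programmatically, where docs/SPEC.md is an assumed path rather than anything from the setup above (running git log --follow -p -- docs/SPEC.md by hand does the same thing):

    #!/usr/bin/env python3
    """Replay how a project spec evolved via its git history (illustrative sketch)."""
    import subprocess
    import sys

    SPEC_PATH = "docs/SPEC.md"  # hypothetical location of the spec; adjust to taste

    def spec_history(path: str = SPEC_PATH) -> str:
        # --follow tracks the file across renames; -p includes the diff for each commit.
        result = subprocess.run(
            ["git", "log", "--follow", "-p", "--", path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        sys.stdout.write(spec_history())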
| ▲ | skeptic_ai 18 hours ago | parent | prev | next [-] | | I, for one, save all conversations in the codebase, including both the human prompts and the model outputs. But I’m using a modified codex to do so.
Not sure why it’s not the default, as it’s useful to have this info. | |
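For anyone not running a modified codex, the same effect can be approximated with a small wrapper that writes each transcript into the repository before committing. A rough sketch only; the docs/llm-sessions directory and the message format are assumptions, not anything the tooling above actually produces:

    #!/usr/bin/env python3
    """Save an LLM session transcript into the repo (illustrative sketch)."""
    import datetime
    import json
    import pathlib
    import sys

    LOG_DIR = pathlib.Path("docs/llm-sessions")  # hypothetical directory, checked into git

    def save_session(messages: list) -> pathlib.Path:
        # messages is assumed to be a list of {"role": ..., "content": ...} dicts.
        LOG_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        path = LOG_DIR / f"session-{stamp}.json"
        path.write_text(json.dumps(messages, indent=2))
        return path

    if __name__ == "__main__":
        # Example: pipe a JSON transcript in on stdin, then commit the resulting file.
        print(save_session(json.load(sys.stdin)))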
| ▲ | luckydata 18 hours ago | parent | prev [-] | | Is this an actual problem? It takes minutes for an AI to explore and document a codebase. Sounds like a non-problem. | | |
| ▲ | shevy-java 18 hours ago | parent | next [-] | | Is that documentation useful? I haven't yet seen a codebase that was well documented by AI. To be fair, humans also fail at that; just look at the GTK documentation as an example. When you point that out, ebassi may ignore you because criticism is unwanted, so the documentation never improves, which suggests they don't want new developers. | |
| ▲ | ahnick 18 hours ago | parent | prev [-] | | Yes, exactly my point as well. It cuts both ways. |
|
|
|
| ▲ | sceptic123 13 hours ago | parent | prev | next [-] |
| I read it more as: we already don't know how everything works, and AI is steering us towards a destination where there is even more of that everything. I would also add that it may reduce the number of people who are _capable_ of understanding the parts it is responsible for. |
| |
| ▲ | g947o 12 hours ago | parent [-] | | Who's "we"? I am sure engineers collectively understand how the entire stack works. With LLM-generated output, nobody understands how anything works, including the very model you just interacted with, as is evident in "you are absolutely correct". | | |
| ▲ | sceptic123 10 hours ago | parent [-] | | Even as a collective whole, engineers will likely only understand the parts of the system that are engineering problems and solutions. And even if they could understand it all, there is still no _one_ person who understands how everything works. |
|
|
|
| ▲ | dcre 12 hours ago | parent | prev | next [-] |
| Just because there is someone who could understand a given system, that doesn’t mean there is anyone who actually does. I take the point to be that existing software systems are not understood by anyone most of the time. |
|
| ▲ | raw_anon_1111 11 hours ago | parent | prev | next [-] |
| If the average tenure of a developer is 2.5 years, how likely is it that, five years in, anyone from the team that started the project is still working on it? |
|
| ▲ | ahnick 19 hours ago | parent | prev | next [-] |
| This happens even today. If a knowledgeable person leaves a company and no knowledge transfer (or, more likely, poor knowledge transfer) takes place, then there is no one left who understands how certain systems work. The company then has to bring in a new developer to study the code and deduce how it works. In our new LLM world, that developer could even have an LLM construct an overview to come up to speed more quickly. |
| |
| ▲ | stoneforger 17 hours ago | parent [-] | | Yes, but each time, the "why" gets obscured, perhaps not completely, either because there's no finished overview or because the original reason can no longer be derived from the current state of affairs. It's like the movie Memento: you're trying to piece together a story from fragments that seem incoherent. |
|
|
| ▲ | noosphr 12 hours ago | parent | prev [-] |
| It's that no one knows if a system works. |