codingblink · 3 days ago
One of the main weaknesses of current AI is that it doesn't know how to modularize unless you explicitly ask for it in the prompt. Or it will modularize but "forget" that it already included a feature in file B, so it redundantly retypes it in file A, causing the feature to break further down the line. Modularizing code matters, and most devs eventually learn this: I had 2k-line files at the beginning of my career (before AI), and I now usually keep files between 100 and 500 lines (though not just because of AI). While I rarely use AI on my code, if I want to feed my program to a local LLM that only has 8-32k of context (depending on the LLM), I need to keep files small to leave room for my prompt and other things. Even as a human, modular code is much easier to edit. I used to like having everything in one file, but not anymore: with a modular codebase you can import a function into two different files, so changing it in one place changes it everywhere. TLDR: modularizing your code makes the codebase easier to review for both you (as a human) and an AI assistant, and reduces the risk of redundant duplication, which AI frequently introduces without noticing.
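A minimal sketch of the "import one function into two files" point; the module and function names (utils.py, signup.py, billing.py, normalize_email) are made up for illustration:

```python
# --- utils.py: the shared module, the single source of truth ---
def normalize_email(addr: str) -> str:
    # Fix or change the behavior here, and every importer picks it up.
    return addr.strip().lower()

# --- signup.py would do: from utils import normalize_email ---
def register_user(raw_email: str) -> str:
    return normalize_email(raw_email)

# --- billing.py would do: from utils import normalize_email ---
def invoice_email(raw_email: str) -> str:
    return normalize_email(raw_email)

print(register_user("  Alice@Example.COM "))  # alice@example.com
print(invoice_email("Bob@Example.com"))       # bob@example.com
```

If an AI instead retypes its own copy of `normalize_email` inside billing.py, the two copies silently drift apart, which is exactly the redundant-development failure described above.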
ting0 · 3 days ago · parent
There needs to be a better harness than what we have; it feels like we're in the stone age with Claude Code etc. Having control over the harness locally, and combining it with local inference and analysis, seems like the way forward. Failing to modularize and failing to maintain the abstraction are the main things that produce slop. That also requires deterministic memory, though.