throwaw12 · 8 hours ago
> only for the first ~10kloc. After that the AI, no matter how well you try to prompt it, will start to destroy existing features accidentally

I am using them in projects with >100kloc, and this is not my experience. At the moment I am babysitting them at any kloc, but I am sure they will get better and better.
yencabulator · 3 hours ago
Meanwhile, in the grandparent comment:

> Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

You are in the 90%.
roywiggins · 7 hours ago
It's fine at adding features to a non-vibecoded 100kloc codebase that you somewhat understand. It's when you're vibecoding from scratch that things tend to spin out at a certain point. I'm sure there are ways around this sort of wall, but I do think it's currently a real thing.
christophilus · 2 hours ago
I'm using it successfully in a >200kloc codebase, too. I think a key is to work in a properly modular codebase so it can focus on the correct changes and ignore unrelated stuff. That said, I do catch it doing some of the things the OP mentioned, particularly leaving "backwards compatibility" shims in place. But really, I've experienced all of the things he mentions whenever I've given it an overly broad mandate.
turnsout · 7 hours ago
Yes, this is my experience as well. I've found the key is having the AI create and maintain clear documentation from the beginning. It helps me understand what it's building, and it helps the model maintain context when it comes time to add or change something.

You also need a reasonably modular architecture that isn't heavily interdependent, because that's hard to reason about, even for humans. And you need lots and lots (and LOTS) of unit tests to prevent regressions.