jackfranklyn | 2 days ago
I've been using AI assistance on a ~30k LOC production codebase. My take: the "vibe coding" framing is misleading because it implies two modes - understand everything or understand nothing. What actually happens is tiered understanding. I might vibe-code a utility function (don't care about implementation, just that it works), but I need to deeply understand data flow and business logic boundaries.

The 20k LOC weekend SaaS stories are real but missing context. Either the person has deep domain knowledge (knows WHAT to build, AI helps with HOW), or it's mostly boilerplate with thin business logic.

For complex systems, the testing point is key. AI generates code faster than you can verify behaviour. The bottleneck shifts from "writing code" to "specifying behaviour precisely enough that you know when it's right". That part isn't going away.

The people I see struggling aren't the ones who don't understand their code - it's the ones who don't understand their requirements.
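To make the "specifying behaviour" point concrete, here's a minimal sketch using Python's hypothesis library (the dedupe_preserve_order utility and its checks are made up for illustration, not from any particular codebase). The assertions are the human work; the function body is the part you can happily vibe-code:

    from hypothesis import given, strategies as st

    def dedupe_preserve_order(items):
        # Hypothetical AI-generated utility: drop duplicates, keep first-seen order.
        seen = set()
        return [x for x in items if not (x in seen or seen.add(x))]

    @given(st.lists(st.integers()))
    def test_dedupe_behaviour(items):
        out = dedupe_preserve_order(items)
        assert set(out) == set(items)             # nothing lost, nothing invented
        assert len(out) == len(set(items))        # each value appears exactly once
        assert out == list(dict.fromkeys(items))  # first-occurrence order kept

Run it with pytest and hypothesis generates the input lists for you, so the remaining effort is exactly the part that isn't going away: saying precisely what "it works" means.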
pigon1002 | 2 days ago
For me, this isn't just a curiosity: it's a question of whether I need to completely change the way I work. Given the same conditions, I'd rather revisit the domain knowledge and get better at instructing the AI than write everything by hand. Outside the areas where AI struggles, or where really deep domain expertise is needed, comparing productivity just doesn't make sense anymore.