▲ Garryslist Code Audit (twitter.com)
5 points by thomasjudge 5 hours ago | 2 comments

▲ subho007 4 hours ago | parent [-]
The interesting thing about this audit isn't the specific bugs. It's what they reveal about the nature of AI-generated code. A human developer who ships a 0-byte AVIF to production is being careless. An AI that does it simply doesn't have the concept of "this failed." It produced a file. The file exists. Done. Same with the test harnesses in production, the 78 unused Stimulus controllers, the logo downloaded 8 times.

None of these are hard problems. Any mid-level developer would catch them in review. But that's the point: there was no review proportional to the output.

This is the real problem with measuring AI coding by LOC/day. Lines of code was always a bad metric. Making it 100x easier to produce didn't make it a good one; it made it 100x more dangerous. What's actually happening at 37K LOC/day is that you've mass-produced decisions nobody evaluated. Some percentage of those decisions are wrong in ways that work fine locally but fail in production. And you won't find them with tests, because the AI wrote those too.

The bottleneck in software was never typing.
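The 0-byte-file failure mode is easy to make concrete: a bare existence check passes, while even the cheapest content check fails. A minimal sketch of the difference (the `generated_ok` helper and `logo.avif` filename are illustrative, not from the audit):

```python
import os
import tempfile

def generated_ok(path: str) -> bool:
    # The check a reviewer would insist on: the output must exist
    # AND be non-empty. "A file was produced" is not success.
    return os.path.exists(path) and os.path.getsize(path) > 0

# Simulate the 0-byte AVIF: create the file but write nothing to it.
d = tempfile.mkdtemp()
path = os.path.join(d, "logo.avif")
open(path, "wb").close()        # the "encoder" ran: a file exists, 0 bytes long

print(os.path.exists(path))     # True  -- the naive "it produced a file, done" signal
print(generated_ok(path))       # False -- the check nobody ran
```

One line of validation is all it takes; the audit's point is that at 37K LOC/day, even one-line checks per artifact never happen.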