SCdF an hour ago

Look, I have no idea if this is related, but I have noticed recently, talking to other developers, that the addictive allure of the speed you get from coding with AI agents is leading them to relax their usual quality bar. This doesn't even feel like the evil overlords cracking the whip harder; it is self-inflicted.

When you have multiple agents all working on things and you are bouncing between them, careful review of their code becomes the bottleneck. So you start lowering your bar to "good enough", where "good enough" is not really good enough. It's a new good enough: you squint at the code, and as long as the shape is vaguely right and the code works (meaning you click around a bit and it seems fine), it passes.

Over time you lose your "theory"[1] of the software, and I would imagine that makes you effectively lower your bar even further, because you are less attached to what good should look like.

This is all anecdotal on my end, but it does feel like quality across the industry has tanked in the last 12 months or so. It feels like there are more outages than normal. I couldn't find a good temporal outage graph, but if you trust this: https://www.catchpoint.com/internet-outages-timeline , the number of outages tracked for 2025 is orders of magnitude higher than for 2024.

Maybe that's because there really are far more outages, or maybe far more are simply being tracked now; I'm not sure. But it definitely _feels_ like we are in for a bumpy ride over the next few years.

[1] in the Programming as Theory Building sense: https://gareth.nz/ai-programming-as-theory-building.html

bn-l 42 minutes ago | parent

Exactly right, and by the time you've rebuilt that theory, could you have just done it all yourself?

lazide 5 minutes ago | parent

Do you get the impression the industry is caring about quality vs ‘good enough’ + cutting costs?