| ▲ | Zigurd 21 hours ago | |||||||
It's verifier law. Coding agents are useful and good and real products because when they screw up, things stop working almost always before they can do damage. Coding agents are flawed in ways that existing tools are good at catching, never mind the more obvious build and runtime errors. Letting AI write your emails and create your P&L and cash flow projections doesn't have to run the gauntlet of tools that were created to stop flawed humans from creating bad code. | ||||||||
| ▲ | phyzome 20 hours ago | parent | next [-] | |||||||
Nah, I've seen them screw in all sorts of ways that would fail in some conditions and not others. You're way too optimistic about this. | ||||||||
| ||||||||
| ▲ | insane_dreamer 11 hours ago | parent | prev [-] | |||||||
Another problem is that the AI may try to fudge the numbers to mask its mistakes so they look like they all add up while the rot is hidden away. Just like it tries to manipulate the unit tests so they pass without fixing the _actual_ bug. I've seen it happen. | ||||||||