jascha_eng 18 hours ago:
Verification is key, and the issue is that almost all AI-generated code looks plausible, so just reading it is usually not enough. You need to build extremely good testing systems and actually run through the scenarios you want to work to be confident in the results. That can mean preview deployments, AI-generated end-to-end tests that produce video output you can watch, or just a very good test suite with guard rails. Without that automation and those guard rails, AI-generated code eventually becomes a burden on your team, because you simply can't manually verify every scenario.
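To make the point concrete, here is a minimal sketch (hypothetical function names, not from the thread) of why scenario-style checks with guard rails catch what reading alone misses: plausible-looking code that silently does the wrong thing on an input nobody eyeballed.

```python
def apply_discount(price, percent):
    # Plausible-looking AI output: reads fine, but silently accepts
    # percent > 100 and produces negative prices.
    return price - price * percent / 100

def apply_discount_checked(price, percent):
    # Guard-railed version: state the scenario's assumptions explicitly.
    if not 0 <= percent <= 100:
        raise ValueError(f"percent out of range: {percent}")
    return price - price * percent / 100

# Scenario-style checks: actually run the cases you care about.
assert apply_discount_checked(200, 25) == 150.0
assert apply_discount(200, 150) == -100  # the bug reading alone would miss

try:
    apply_discount_checked(200, 150)
except ValueError:
    pass  # the guard rail rejects the bad scenario instead of returning -100
```

The point is not this particular validation, but that the verification lives in executed scenarios, not in a reviewer's impression of the diff.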
yuedongze 18 hours ago:
Indeed, I see verification debt outweighing traditional tech debt very, very soon...
jopsen 13 hours ago:
I would rather write the code and have AI write the tests :) And I have, on occasion, found that useful.
bigbuppo 17 hours ago:
And with any luck, they don't vibe-code tests that ultimately just return true;
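A cheap guard against exactly that failure mode is a mutation-style smoke check: run the same tests against a deliberately broken implementation, and if they still pass, they verify nothing. A minimal sketch (all names hypothetical):

```python
def add(a, b):
    return a + b

def add_broken(a, b):
    # Intentional mutant: a test worth keeping must fail on this.
    return a - b

def vacuous_test(fn):
    # The "vibe-coded" test: always passes, checks nothing.
    return True

def real_test(fn):
    return fn(2, 3) == 5

def is_useful(test):
    # A useful test passes on the real code AND fails on the mutant.
    return test(add) and not test(add_broken)

print("vacuous_test:", "useful" if is_useful(vacuous_test) else "vacuous")
print("real_test:", "useful" if is_useful(real_test) else "vacuous")
# → vacuous_test: vacuous
# → real_test: useful
```

Tools like mutmut automate this idea for whole suites; the sketch just shows why an always-true test is detectable mechanically, not only by review.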
catigula 18 hours ago:
I can automatically generate suites of plausible tests using Claude Code. If you can make "no AI for tests" a rule, then you can just as easily make the rule "no AI", or just learn to cope with it.