frumplestlatz 2 hours ago
Validating the correctness of AI output seems like one of the biggest problems we’re going to face: AI can generate code far faster than humans can adequately review it. My work is in formal verification, and we’re looking at how to apply what we do to putting guard rails on AI output. It’s a promising space, but there’s a long way to go, and in the meantime I think we’re about to enter an era in which exploitable bugs become extremely common thanks to vibe coding.

I vibe coded an entire LSP server, in a day, for an oddball verification language I’m stuck working in. It’s fantastic to have and an enormous productivity boost; writing the same thing myself would literally have taken months. Moreover, because it ties deeply into unstable upstream compiler implementation details, I would struggle to maintain it by hand. The AI took care of all of that, but I have almost no idea what’s in there. It would be foolish to assume the code is correct or safe.