▲ | TrinaryWorksToo 2 days ago
How do we know this isn't survivorship bias? Perhaps it only looks like there aren't any low-severity bugs because only the high-severity ones get reported?
▲ | dmonroy 2 days ago | parent | next [-]
That's absolutely a factor here. We're missing the stuff nobody talks about: "AI generated an inefficient loop" or "AI forgot to close a file handle". The documented cases were documented precisely because they were noteworthy.

That said, even with survivorship bias there's a pattern. When humans write bad code we see the full spectrum, from typos to total meltdowns. With AI, the failures cluster around specific security fundamentals (quick sketch below):

- Input validation
- Auth checks
- Rate limiting

I've never seen an AI make a typo, have you? Does that mean AI learned to code from tutorials that skip the boring security chapters?... think about it.

So yes, we are definitely seeing survivorship bias in the severity reporting. But the types of survivors tell us something important about what AI consistently misses. The low-severity bugs probably exist; they just aren't making headlines.

The real question: if this is just the visible part of the iceberg, what's underneath?
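To make that concrete, here's a rough Python sketch of the failure cluster I mean. All the names (transfer_funds, SESSION_TOKENS, the limits) are made up for illustration, not taken from any real incident or codebase: the first function is the "works in the demo" shape I keep seeing generated, the second adds the three checks that keep getting skipped.

    # Illustrative sketch only: names, token store, and limits are hypothetical.
    import time
    from collections import defaultdict

    SESSION_TOKENS = {"alice-token": "alice"}          # stand-in auth store
    _RATE: dict[str, list[float]] = defaultdict(list)  # per-user call timestamps


    def transfer_funds_naive(user_token: str, amount: str) -> str:
        # Typical generated version: no auth check, no input validation,
        # no rate limit -- exactly the cluster listed above.
        return f"transferred {float(amount)}"


    def transfer_funds_hardened(user_token: str, amount: str) -> str:
        # 1. Auth check: reject unknown tokens instead of trusting the caller.
        user = SESSION_TOKENS.get(user_token)
        if user is None:
            raise PermissionError("invalid session token")

        # 2. Input validation: amount must parse and fall in a sane range.
        try:
            value = float(amount)
        except ValueError:
            raise ValueError("amount is not a number")
        if not (0 < value <= 10_000):
            raise ValueError("amount out of allowed range")

        # 3. Rate limiting: at most 5 transfers per user per 60 seconds.
        now = time.monotonic()
        recent = [t for t in _RATE[user] if now - t < 60]
        if len(recent) >= 5:
            raise RuntimeError("rate limit exceeded")
        recent.append(now)
        _RATE[user] = recent

        return f"transferred {value:.2f} for {user}"


    if __name__ == "__main__":
        print(transfer_funds_hardened("alice-token", "250.00"))

Every one of those checks is exactly the boilerplate a tutorial cuts for brevity, which is my point about what the training data skips.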
▲ | hinkley 13 hours ago | parent | prev | next [-]
The fact that they don't mention them is exactly what makes them the most likely case. "Did you hit your wife?" "I haven't murdered anybody." "Murder?? Nobody mentioned murder, Mr Fieldman."
▲ | dfcheng a day ago | parent | prev [-]
This matches my experience having LLMs write code: ensuring security just isn't an adequate part of their training. Of course, the modern developers I work with don't give a shit either.