gobdovan 4 hours ago

I think a lot of this is exposing a change in assumed context, but it seems better to adapt to the new trends than to discontinue security programs. AI lets good-faith bug hunters look through more repos they are not deeply familiar with. They may recognize a bad pattern quickly, almost like a very specialized static-analysis rule. But without project context, it is not always clear whether something is a real bug, a footgun, expected behavior, or just out of scope. The blog shows obvious slop examples, but I think borderline accepted-vs-rejected examples would be more useful. They would help people understand what is worth reporting and what would just drain maintainers. It could also help to ask reporters to clarify how the bug was found, so that people set reasonable expectations: "AI-found and manually confirmed", "AI-assisted", or "no AI used".
cyclopeanutopia 4 hours ago | parent

> It could also help to ask reporters to clarify how the bug was found, so that people set reasonable expectations: "AI-found and manually confirmed", "AI-assisted", or "no AI used".

And why would they tell the truth?