gobdovan 4 hours ago

I think a lot of this comes down to a change in the context reporters can be assumed to have, but it seems better to adapt to the new reality than to discontinue security programs entirely.

AI lets good-faith bug hunters look through far more repos than they are deeply familiar with. They may recognize a bad pattern quickly, almost like a very specialized static-analysis rule, but without project context it is not always clear whether something is a real bug, a footgun, expected behavior, or simply out of scope.
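
To make that concrete, here is a toy sketch of the kind of "specialized rule" I mean (the yaml.load footgun is an arbitrary example I picked, not one from the blog): it finds a known-bad pattern fast, but it cannot tell you whether the input is attacker-controlled in that particular repo.

    # Toy static-analysis-style rule: flag yaml.load() calls with no
    # explicit Loader, a classic Python footgun. Whether a hit is a
    # real bug depends on project context the scanner does not have.
    import re
    import sys
    from pathlib import Path

    PATTERN = re.compile(r"yaml\.load\((?![^)]*Loader)")

    def scan(root: str) -> None:
        for path in Path(root).rglob("*.py"):
            lines = path.read_text(errors="ignore").splitlines()
            for lineno, line in enumerate(lines, 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: yaml.load without explicit Loader")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else ".")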

The blog shows obvious slop examples, but borderline accepted-vs-rejected examples would be more useful: they would show people what is worth reporting and what would just drain maintainers.

It could also help to ask reporters to disclose how the bug was found, so they can set reasonable expectations up front: "AI-found and manually confirmed", "AI-assisted", or "no AI used".
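
Concretely, this could be a single required field in the report form. A hypothetical fragment (the wording is mine, not from any real program):

    How was this bug found? (pick one)
    [ ] No AI used
    [ ] AI-assisted (AI pointed me at the area; I did the analysis myself)
    [ ] AI-found and manually confirmed (I reproduced it before reporting)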

cyclopeanutopia 4 hours ago | parent

> It could also help to ask reporters to disclose how the bug was found, so they can set reasonable expectations up front: "AI-found and manually confirmed", "AI-assisted", or "no AI used".

And why would they tell the truth?

gobdovan 4 hours ago | parent

It doesn't require everyone to tell the truth to be useful.

If the bug hunter is acting in good faith, they can communicate how much scrutiny they think their report deserves, which may reduce maintainer frustration.

If the bug hunter is acting in bad faith and claims "no AI used" while the report shows obvious AI-generated content (detectable by a classifier), maintainers can dismiss it more easily.
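
Even a crude heuristic would catch the laziest cases. A toy sketch in Python (the phrase list and threshold are made up for illustration; a real classifier would be trained on actual accepted and rejected reports):

    # Toy heuristic for flagging likely LLM boilerplate in a report.
    STOCK_PHRASES = [
        "as an ai language model",
        "i hope this helps",
        "this could potentially lead to",
        "it is important to note that",
        "in the ever-evolving landscape",
    ]

    def looks_like_slop(report: str, threshold: int = 2) -> bool:
        """Count stock phrases; many hits with no specifics is a red flag."""
        text = report.lower()
        hits = sum(phrase in text for phrase in STOCK_PHRASES)
        return hits >= threshold

    if __name__ == "__main__":
        sample = "It is important to note that this could potentially lead to RCE."
        print(looks_like_slop(sample))  # True: two stock phrases, zero specifics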