0123456789ABCDE 2 days ago

something i am missing in this area is education and services.

if, during an automated code review, claude finds a vulnerability in a dependency, where should i direct it to share the findings?

who would be willing to take the slop-report, and validate it?

i've never done vulnerability disclosure yet, but with opus at max effort i have found some security issues in popular frameworks/libraries i depend on.

a proper report can't be one pass; it has to validate that it's a real problem. but ask opus to do that and you run the risk of the api refusing the request, endangering your account status. ask it to do it anyway and write a report, and now you're burning tokens on a report that's likely to be ignored, because slop.

so i sit on this, and hope it doesn't hit me.

hedgehog 2 days ago | parent | next [-]

It often takes strong understanding of the upstream codebase and roadmap to write a good patch. It's easy enough to write a rough PoC and draft patch but getting all the way through the cycle takes up a bunch of time both from you and the maintainers (who are often already overloaded). My advice would be to draft a bunch privately, take one of the highest impact all the way through a deployed fix, and then plan based on what you learn. Some people's answer is to maintain private forks with automated fixes applied, with a periodic rebase on upstream.

0123456789ABCDE 2 days ago | parent [-]

i'm well aware that a pull request with a fix is a lot of work. i don't pretend to have the capacity for that, with all the rest i have to attend to.

it just doesn't sit well with me that i'm aware of something being broken and not telling someone who would otherwise want to know about it.

hedgehog 2 days ago | parent [-]

In my opinion maintainers can easily run a "hey robot, scan my code for risky patterns" pass themselves to get a rough list, or they can solicit unreviewed contributions, but otherwise it's better not to add noise.

0123456789ABCDE 2 days ago | parent | prev | next [-]

i'd be happy to use an official skill for vulnerability reporting

the skill would be manually triggered when vulnerabilities are found; it would do another pass for details (version, files, lines), then write a lightweight report and submit it somewhere. anthropic could host this, or work with h1 to do it. when the models have extra capacity, a process comes around and picks up these reports one by one, does another check, maybe with a proof-of-concept, and reports through proper channels.
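to make the idea concrete, here's a minimal sketch of what the lightweight report payload could look like. everything here is an assumption on my part: the field names, the example package, and the idea of a json payload are hypothetical, not any official schema from anthropic or h1.

```python
# hypothetical sketch of a lightweight vulnerability report payload;
# field names and values are illustrative assumptions, not a real schema
import json
from dataclasses import dataclass, asdict

@dataclass
class VulnReport:
    package: str        # dependency where the issue was found
    version: str        # exact version that was scanned
    files: list         # affected files
    lines: list         # relevant line numbers
    summary: str        # one-paragraph description of the issue
    validated: bool = False  # flipped to True after the second-pass check
    poc: str = ""       # optional proof-of-concept, filled in later

# example report for a made-up package and finding
report = VulnReport(
    package="example-framework",
    version="4.2.1",
    files=["src/session.py"],
    lines=[118],
    summary="session token compared with == instead of a constant-time compare",
)

# serialize for submission to whatever endpoint ends up hosting this
payload = json.dumps(asdict(report), indent=2)
print(payload)
```

the `validated` flag is the point: the cheap first pass files the report unvalidated, and the later capacity-spare process flips it only after its own check, so maintainers could filter on it.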

esafak 2 days ago | parent | prev [-]

Share it in the repo's issues, discussions, or chat?

0123456789ABCDE 2 days ago | parent [-]

that would be full disclosure. i don't particularly dislike the idea, but it's slop, the devs are already overwhelmed, and i don't fully understand the legal implications i'd be exposed to.

esafak 2 days ago | parent [-]

You can omit the details and share them on request.