caymanjim 5 hours ago

There's no reason to care that a human spent time on it.

Humans are bad at writing code. Garbage PRs and slop have been a problem in open source and bug bounty programs since long before AI came on the scene.

We need better AI so that there's no need to solicit external bug fixes, and better AI so other contributions can be evaluated for usefulness and quality.

Why care whether a human ever looked at it at all? Caring implies that humans are adding value to the process. It's possible for a human to add value, and the right human can add tremendous value. But I'll take a completely autonomous AI over 99% of human software engineers and 99% of the people contributing PRs and bugfixes.

It was hard to keep up with slop before. It's a lot harder now. AI will help weed through the garbage.

48terry 4 hours ago | parent | next [-]

If AI is already mass-producing garbage PRs and other unreliable crap, what makes AI, already established as a producer of unreliable crap, the solution for review? What stops the reviewing AI from producing equally unreliable crap in its reviews?

A magical, hypothetical AI that always gets it right and will make all these problems go away is neither a solution nor a plan. It's wishful thinking.

caymanjim 4 hours ago | parent [-]

AI in the hands of the right people is incredibly powerful. A good team of engineers with AI doing their own bug-hunting on their own code is already far better than any outsider—human, AI, or human-assisted AI—could ever do. A good internal AI-assisted team is also the only thing that can vet all other contributions. It doesn't matter if those contributions are 100% human-written, 100% AI-written, or a combination. The problem is the same.

Unless you stop accepting outside contributions at all, there's simply no way to determine if a human was involved in the process. Any mandate that all contributions come from humans will fail because there's no detection or enforcement mechanism. You have to assume it's slop either way, and improve your ability to vet it. Only another AI can do that, because we don't have enough qualified humans to keep up.

48terry 4 hours ago | parent [-]

That didn't actually address my comment or question, so I'll repeat it, I guess.

We already know AI is spamming unreliable crap and slop. The apparent solution is "more, better AI".

Why wouldn't the AI doing all this screening also produce crap and slop?

Is the plan there "AI but it actually works right and doesn't produce crap and slop"?

caymanjim 4 hours ago | parent [-]

I did address it: AI in the hands of the right people.

Random contributions to bug bounty programs or random PRs for new features come from all corners: expert engineers producing fantastic code; intermediate engineers trying their hardest but producing mediocre code; junior engineers wasting everyone's time with ill-conceived, poorly written code; and all of the above with varying amounts of AI assistance. And now there's also purely automated AI, where the only human involvement is pointing an AI at GitHub with no guidance.

You can't stop it on the inbox side. Either you turn the inbox off, or you leverage AI to help you separate the wheat from the chaff.

bcjdjsndon 5 hours ago | parent | prev [-]

Reasonable logic but I bet you get downvoted