gwbas1c a day ago

> I'd MUCH rather see a holistic embrace and integration of these tools into our ecosystems. Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.

That doesn't address the controversy, because you are a reasonable person assuming that other people using AI are reasonable like you and know how to use AI correctly.

The rumors we hear have to do with projects inundated with more pull requests than they can review, where the pull requests are obviously low quality and the contributors' motives are selfish; i.e., the PRs are opened to get credit on a GitHub profile. In this case, the pull requests aren't opened with the same good faith that you're putting into your work.

In general, a good policy toward AI submissions really has to address the "good faith" issue first, and then explain how much tolerance the project has for vibecoding.

pixl97 a day ago | parent | next [-]

>other people are reasonable like you

No AI needed. Spam on the internet is a great example of the number of unreasonable people on the internet. And for this I'll define unreasonable as "committing an action they would not want committed back at them".

AI here is the final nail in the coffin of a problem many sysadmins have been dealing with for decades: unreasonable actors are a form of asymmetric warfare on the internet, specifically the global internet, because with some of these actors you have zero recourse. AI moved this from moderately drowning in crap to being crushed under an ocean of it.

Going to be interesting to see how human systems deal with this.

LinXitoW 21 hours ago | parent | next [-]

Every order of magnitude of difference constitutes a categorical difference.

The ability to create spam instantly, fitted perfectly to any situation, 24/7, everywhere, is very different from before. Spam used to be annoying but generally distinct enough to tell apart, and (in general) never so voluminous that it made an entire platform useless.

With AI, the entire internet IS spam. No matter what you google or look at, there's a very high chance it's AI spam. The internet is super duper extra dead.

pocksuppet 14 hours ago | parent | next [-]

And there's the incentive to spam: AI pull request writers feel like they're helping the project, not hurting it, so they do it a lot more.

esseph 36 minutes ago | parent | prev | next [-]

"The internet is super duper extra dead."

I get unreasonably angry when I read this statement, or similar ones.

If you mean "portions of the web I go to or my email inbox", you may be right.

But for the rest of us who hang out in one or more private spaces, sometimes with connections between them, the internet is better connected than ever, and it's easier than ever to find people, groups, information, and interests.

PunchyHamster 9 hours ago | parent | prev [-]

And even if you figure out a reliable way to detect AI, guess what: USERS USE IT TOO for legitimate content, so you can't even use a system like this. It's horrid

Two_hands 3 hours ago | parent [-]

I tried to build something that aims to help in these cases: https://github.com/YM2132/PR_guard. It's not perfect, but with stronger AI detection tools (Pangram) it could be improved, although that raises the issue of cost and who pays for it.
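
Roughly, the shape of the idea in code (a sketch only: detect_ai_probability is a hypothetical stand-in, not Pangram's actual API, and the 0.9 threshold is an arbitrary assumption; caching scores is one way to keep detector costs down):

    # Hypothetical sketch of an AI-content gate for PRs.
    # detect_ai_probability stands in for a real detector (e.g. Pangram);
    # its name and signature are assumptions, not a real API.

    import hashlib

    _cache: dict[str, float] = {}  # score cache, so paid calls aren't repeated

    def detect_ai_probability(text: str) -> float:
        """Stand-in for a detector call; replace with a real client."""
        return 0.0  # dummy value so the sketch runs as-is

    def score_diff(diff_text: str) -> float:
        """Score a diff once, then serve repeats from the cache."""
        key = hashlib.sha256(diff_text.encode()).hexdigest()
        if key not in _cache:
            _cache[key] = detect_ai_probability(diff_text)
        return _cache[key]

    def flag_for_review(diff_text: str, threshold: float = 0.9) -> bool:
        """True if the PR should be held for extra human scrutiny."""
        return score_diff(diff_text) >= threshold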

shevy-java a day ago | parent | prev | next [-]

> Spam on the internet is a great example of the number of unreasonable people on the internet.

AI also generates spam though, so this is a much bigger problem than merely "unreasonable" people alone.

pixl97 21 hours ago | parent [-]

I mean, AI currently generates spam at the behest of unreasonable people, and we can think of it as a powerful automated extension of other technologies. We could say it's a new problem in quantity but the same old problem in kind.

Now, with that said, I don't think we're very far from automated agents causing problems all on their own.

johnmaguire 20 hours ago | parent | prev | next [-]

> AI here is the final nail in the coffin

so far*

mschuster91 20 hours ago | parent | prev [-]

> Going to be interesting to see how human systems deal with this.

At least a bunch of lawyers already got hit when their court filings cited hallucinated cases. If this trend continues, I'll not be surprised when some end up disbarred.

beachy 12 hours ago | parent [-]

This seems self-correcting. Every lawyer, and maybe every court, will use AI to review the other party's filings for such things. AI overseeing what is true and what is not - nothing disturbing about that dystopian future.

aleph_minus_one 18 hours ago | parent | prev | next [-]

> The rumors we hear have to do with projects inundated with more pull requests than they can review, where the pull requests are obviously low quality and the contributors' motives are selfish; i.e., the PRs are opened to get credit on a GitHub profile. In this case, the pull requests aren't opened with the same good faith that you're putting into your work.

"Open source" does not mean "open contribution", i.e. just because the software is open source does not imply that your contribution (or in particular a not-high-effort contribution) is welcome.

A well-known application that is open source in the strictest sense but not open contribution is SQLite.

throwaway2037 4 hours ago | parent [-]

Google's Guava Java library is very similar: open source, but it almost never accepts outside contributions. Is the Go standard library similar?

lukan 9 hours ago | parent | prev | next [-]

I see the solution as engaging only with reasonable people and ignoring the rest.

And the problem is filtering them out. That is real work, and it can be draining and demoralizing, since unreasonable people usually have a sad story about why they are the way they are; but you cannot do therapy or coaching for random strangers while trying to get a project going.

So if people contribute good things, engage with them. If they contribute slop (AI generated or not), say no to them.

codebolt 9 hours ago | parent [-]

There needs to be a mechanism to rate the person submitting the PR. Anyone who wants to submit code to a well-known repo would first need to build a demonstrable history of making high-quality contributions to lesser-known projects. I'm not very familiar with the open source scene, but I'd find it very surprising if such a mechanism were not already in place. Seems like an obvious solution to the problem of vibe coders submitting slop.
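
A crude version of one such signal can be scripted against the GitHub search API (a sketch: the query qualifiers are real GitHub search syntax, but the minimum of 5 merged PRs is an arbitrary assumption, and no such gate exists on GitHub today):

    # Rough sketch of one reputation signal: merged PRs across public repos.

    import requests

    def merged_pr_count(username: str) -> int:
        """Count a user's merged pull requests via the GitHub search API."""
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": f"type:pr author:{username} is:merged"},
            headers={"Accept": "application/vnd.github+json"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["total_count"]

    def has_track_record(username: str, minimum: int = 5) -> bool:
        """Crude gate: does this account have a history of merged work?"""
        return merged_pr_count(username) >= minimum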

happymellon 8 hours ago | parent | next [-]

> build a demonstrable history of making high-quality contributions to lesser-known projects.

> Seems like an obvious solution

I'm not sure how you would rank the quality of submissions to grade contributors like this. Just because a project accepted your PR doesn't make it high quality; the best we can hope for is that it was better than not accepting it?

rwmj 6 hours ago | parent [-]

I think we need one of those "your solution to spam" checklists[1], but for AI slop.

[1] https://craphound.com/spamsolutions.txt

lukan 9 hours ago | parent | prev [-]

Oh, it is an obvious solution, but not trivial to implement in a robust way.

nextaccountic 13 hours ago | parent | prev | next [-]

> The rumors we hear have to do with projects inundated with more pull requests than they can review, where the pull requests are obviously low quality and the contributors' motives are selfish.

There's a way to handle this: run an automatic AI review on every PR from new contributors. Fight fire with fire.

(Actually, this was the solution for spam even before LLMs; see "A Plan for Spam" by Paul Graham. Basically, if you have a cheap but accurate filter (especially one you can train on your own patterns), it should be enabled as a first line of defense. Anything the filter doesn't catch that the user has to mark as spam manually becomes data to improve the filter.)
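
For the unfamiliar, a minimal sketch of the Bayesian idea from that essay: score each token by how often it appears in known-bad versus known-good submissions, then combine the most telling tokens into one probability. The 0.01/0.99 clamp, the 0.4 default for unseen tokens, and the 15-token window follow Graham's essay; the training corpora are whatever a project labels itself:

    # Minimal sketch of Graham-style Bayesian filtering: per-token
    # bad-vs-good probabilities, combined over the most telling tokens.

    from collections import Counter
    import math
    import re

    def tokens(text: str) -> list[str]:
        return re.findall(r"[a-z']+", text.lower())

    def train(good_docs: list[str], bad_docs: list[str]) -> dict[str, float]:
        """Estimate P(bad | token), clamped to (0.01, 0.99)."""
        good, bad = Counter(), Counter()
        for doc in good_docs:
            good.update(tokens(doc))
        for doc in bad_docs:
            bad.update(tokens(doc))
        probs = {}
        for tok in set(good) | set(bad):
            g = good[tok] / max(len(good_docs), 1)
            b = bad[tok] / max(len(bad_docs), 1)
            probs[tok] = min(0.99, max(0.01, b / (g + b)))
        return probs

    def bad_probability(text: str, probs: dict[str, float]) -> float:
        """Combine the 15 tokens furthest from neutral into one score."""
        interesting = sorted(
            (probs.get(t, 0.4) for t in set(tokens(text))),
            key=lambda p: abs(p - 0.5),
            reverse=True,
        )[:15]
        prod = math.prod(interesting)
        inv = math.prod(1 - p for p in interesting)
        return prod / (prod + inv)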

Moreover, if the review detects LLM-generated content that the user didn't disclose, maybe there should be consequences.

cortesoft 12 hours ago | parent | prev | next [-]

How is an AI policy going to help prevent bad faith actors, though?

People who are doing those harmful things with AI aren’t going to stop because of a policy. They are just going to lie and not admit their submissions are AI generated.

At that point, you will still have to review the code and reject it if it is bad quality, just like you had to without an AI policy. The policy doesn’t make it any easier to filter out the bad faith AI submissions.

In fact, if we DO develop an efficient way to weed out the bad faith PRs that lie about using AI, then why do we need the policy at all? Just use that same system to weed out the bad submissions and skip the policy completely.

robinsonb5 7 hours ago | parent | next [-]

The point of a policy is to make a decision and then communicate that decision, so that you don't end up in a lengthy argument (or make inconsistent decisions) each time a particular situation arises.

You're right that it won't stop anyone doing harmful things with AI - all it does is codify what is and isn't considered acceptable by a project, and make it easier to justify rejections.

If a project wants to continue evaluating submissions on a case-by-case basis (and has the manpower to do it without the support of a policy), then that's entirely their choice, of course.

Serenacula 11 hours ago | parent | prev | next [-]

Some of them will lie. But plenty of people do follow the rules or act in good faith, so at the very least a policy can cut the volume down.

izacus 6 hours ago | parent | prev | next [-]

Policies protect people on the project by making rejection of bad faith actors easier on them (less energy spent, less work needed).

They're also a statement of the organization's support for people who reject slop PRs, and they help when an AI-using author writes a smear blog post against the reviewer, as we've seen before.

PunchyHamster 9 hours ago | parent | prev [-]

If the policy makes them at least double-check that the AI didn't put nonsense in, that's already a win

yfw 12 hours ago | parent | prev | next [-]

The curl project is proof of this. No rumors.

utopiah 9 hours ago | parent [-]

Right, I was going to ask: what "rumors"? The whole thing is documented across numerous projects, so much so that the inevitable AI-guideline discussion is typically the direct result of a flood of low-quality "contributions" that the people managing the project can't handle.

It's not a rumor, it's a pattern.
