mcny 5 hours ago

I feel like we are talking past each other.

1. I write hobby code all the time. I've basically stopped writing it by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to that. I had zero users before and I still have zero users. And that is ok.

2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin. I am not sure if it even is a defect most of the time. Could it be intentional? I don't know. The best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on the first attempt, but usually maintainers are kind to newcomers because, who knows, maybe one of those newcomers could become a maintainer one day. "Not everyone can become a great artist, but a great artist can come from anywhere."

LLMs changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next-token predictor to do the work? To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day and it is simply not possible to go in depth on everything. We must pick our battles.

I almost forgot what I was trying to say. The bottom line is: if you are doing your own thing like I am, LLMs are great. However, I would ask everyone to have empathy and not spread our diarrhea into other people's kitchens.

If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?

sheepscreek 4 hours ago | parent | next [-]

The real problem is that OSS projects do not have enough humans to manually review every PR.

Even if they were willing to deploy agents for initial PR reviews, it would be a costly affair, and most OSS projects don't have that kind of money.

mycall 4 hours ago | parent | next [-]

PRs are just that: requests. They don't need to be accepted; they can be used piecemeal, merged in by whoever finds them useful. Thus, not every PR needs to be reviewed.

debazel 3 hours ago | parent | next [-]

Of course, but when you add enough noise you lose the signal, and as a consequence no PRs get merged anymore because it's too much effort just to find the ones you care about.

Spivak 2 hours ago | parent [-]

Don't allow PRs from people who aren't contributors; problem solved. Closing your doors to the public is exactly how people solved the "dark forest" problem of social media, and OSS was already undergoing that transition with humans authoring garbage PRs for reasons other than genuine enthusiasm. AI will only get us to the destination faster.
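
As a rough sketch of how that could be automated (assuming a token in GH_TOKEN and the standard GitHub REST API; the repo name and the trusted-association set are just illustrative):

  import os, requests

  REPO = "owner/project"   # hypothetical repository
  API = "https://api.github.com/repos/" + REPO + "/pulls"
  HEADERS = {"Authorization": "Bearer " + os.environ["GH_TOKEN"],
             "Accept": "application/vnd.github+json"}

  # Author associations we still accept PRs from (GitHub's author_association field).
  TRUSTED = {"OWNER", "MEMBER", "COLLABORATOR", "CONTRIBUTOR"}

  for pr in requests.get(API, params={"state": "open"}, headers=HEADERS).json():
      if pr["author_association"] not in TRUSTED:
          # Close without review; a polite canned comment could be posted first.
          requests.patch(API + "/" + str(pr["number"]),
                         json={"state": "closed"}, headers=HEADERS)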

I don't think anything of value will be lost by choosing not to interact with the unfettered masses, who now count millions of AI bots among their number.

nunez an hour ago | parent [-]

That would be a huge loss IMO. Anyone being able to contribute to projects is what makes open source so great. If we all put up walls, we're basically halfway back to the bad old days of closed source software reigning supreme.

Then there are the security concerns this change would introduce. Forking a codebase is easy, but so are supply chain attacks, especially when some projects are being entirely iterated on and maintained by Claude now.

pjmlp 18 minutes ago | parent [-]

They are open source cathedrals.

nemomarx 3 hours ago | parent | prev | next [-]

Determining which PRs you should accept or take further seems like it requires some level of review? Maybe more like PR triage, I suppose.

protocolture 3 hours ago | parent | prev | next [-]

Until you unintentionally pull in a vulnerability or intentional backdoor. Every PR needs to be reviewed.

zahlman 3 hours ago | parent | next [-]

The point was that you can also just reject a PR on the basis of what it purports to implement, or even just blanket ignore all PRs. You can't pull in what you don't... pull in.

throwaway150 3 hours ago | parent | prev [-]

> Every PR needs to be reviewed.

Why would you review a PR that you are never going to merge?

allthetime 3 hours ago | parent | next [-]

You have to first determine whether or not you might want to merge it...

protocolture 2 hours ago | parent | prev [-]

Having not reviewed it, how do you know you are never going to merge?

throwaway150 2 hours ago | parent [-]

If a PR claims to solve a problem I don't have, then I can skip reviewing it because I'll never merge it.

I don't think every PR needs reviewing. Some PRs can be dismissed just by looking at what they claim to do. That takes a quick glance, not a review.

mwwaters an hour ago | parent [-]

I took this thread as asking whether PRs that are pulled in should be reviewed.

bigiain 3 hours ago | parent | prev | next [-]

You didn't see the latest AI grifter escalation? If you reject their PRs, they then get their AI to write hit pieces slandering you:

"On 9 February, the Matplotlib software library got a code patch from an OpenClaw bot. One of the Matplotlib maintainers, Scott Shambaugh, rejected the submission — the project doesn’t accept AI bot patches. [GitHub; Matplotlib]

The bot account, “MJ Rathbun,” published a blog post to GitHub on 11 February pleading for bot coding to be accepted, ranting about what a terrible person Shambaugh was for rejecting its contribution, and saying it was a bot with feelings. The blog author went to quite some length to slander Mr Shambaugh"

https://pivot-to-ai.com/2026/02/16/the-obnoxious-github-open...

blackcatsec an hour ago | parent [-]

I am very strongly convinced that the person behind the agent prompted the angry post to the blog because they didn't get the gratification they were looking for by submitting an agent-generated PR in the first place.

bigiain 40 minutes ago | parent [-]

I agree. But even _that_ was taking advantage of an LLM's ability to generate text faster than humans can. If the person behind this had had to create that blog post from scratch by typing it out themselves, maybe they would have gone outside and touched grass instead.

JumpCrisscross 2 hours ago | parent | prev [-]

> not every PR needs to be reviewed

Which functionally destroys OSS, since the PR you skipped might have been slop or might have been a security hole.

mcphage 2 hours ago | parent [-]

I don’t think the OP was suggesting maintainers blindly accept PRs—rather, they can just blindly reject them.

devsda an hour ago | parent [-]

I think GP is making the opposite point.

Blindly rejecting all PRs means you are also missing out on potential security issues submitted by humans or even AI.

softwaredoug 4 hours ago | parent | prev | next [-]

Many open source projects are also (rightly) risk averse and care more about avoiding regressions.

bigiain 3 hours ago | parent | prev | next [-]

I've been following Daniel from the Curl project, who has been speaking out widely about slop-coded PRs and vulnerability reports. It doesn't sound like they have ever had any problem keeping up with human-generated PRs. It's the mountain of AI-generated crap that's now sitting on top of all the good (or even bad but worth mentoring) human submissions.

At work we don't publish any code and aren't part of the OSS community (except as grateful users of other people's projects), but even we get clearly AI-enabled emails. Just this week my boss forwarded me two that were pretty much "Hi, do you have a bug bounty program? We have found a vulnerability in (website or app obliquely connected to us)." One of them was a static site hosted on S3!

There have always been bullshitters looking to fraudulently invoice you for unsolicited "security analysis". But the bar for generating bullshit that looks plausible enough that someone has to spend at least a few minutes working out whether it's "real" has become extremely low, and the velocity with which the bullshit can be generated, stamped with the victim's name and contact details, and vibe-spammed to hundreds or thousands of people has become near unstoppable. It's like the SEO spammers from 5 or 10 years back, but superpowered with OpenAI/Anthropic/whoever's cocaine.

leoqa 3 hours ago | parent | prev | next [-]

My hot take: reviewing code is boring, harder than writing code, and less fun (no dopamine loop). People don't want to do it; they want to build whatever they're tasked with. Making code review easier (human in the loop, etc.) is probably a big rock for the new developer paradigm.

cryptonector 3 hours ago | parent | prev [-]

Oh no! It's pouring PRs!

Come on. Maintainers can:

  - insist on disclosure of LLM origin
  - review what they want, when they can
  - reject what they can't review
  - use LLMs (yes, I know) to triage PRs
    and pick which ones need the most
    human attention and which ones can be
    ignored/rejected or reviewed mainly
    by LLMs
There are a lot of options.
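
As a sketch of the triage option (assuming GH_TOKEN in the environment, the standard GitHub REST API, and a hypothetical call_llm() helper wired to whatever model the maintainer trusts; the labels are illustrative):

  import os, requests

  API = "https://api.github.com/repos/owner/project"   # hypothetical repository
  HEADERS = {"Authorization": "Bearer " + os.environ["GH_TOKEN"],
             "Accept": "application/vnd.github+json"}

  def call_llm(prompt):
      """Placeholder for whatever LLM backend (local or hosted) the maintainer trusts."""
      raise NotImplementedError

  for pr in requests.get(API + "/pulls", params={"state": "open"}, headers=HEADERS).json():
      diff = requests.get(pr["diff_url"], headers=HEADERS).text
      label = call_llm("Label this PR as NEEDS_HUMAN_REVIEW, LLM_REVIEW_OK, or IGNORE.\n"
                       "Title: " + pr["title"] + "\n"
                       "Description: " + (pr["body"] or "") + "\n"
                       "Diff (truncated):\n" + diff[:20000])
      print(pr["number"], label)   # a human still makes the final call

Even something that crude would at least order the queue; nothing gets merged without a human.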

And it's not just open source. Guess what's happening in the land of proprietary software? YUP!! The exact same thing. We're all becoming review-bound in our work. I want to get to huge MR XYZ, but I have to review several other people's much larger MRs first -- now what?

Well, we need to develop a methodology for working with LLMs. "Every change must be reviewed by a human" is not enough. I've seen incidents caused by ostensibly-reviewed but not actually understood code, so we must instead go with "every change must be understood by humans". That can sometimes mean a plain review (when the reviewer is an SME and also an expert in the affected codebase(s)), and it can mean code inspection (much more tedious and exacting). But it might also involve posting transcripts of the LLM conversations used for developing and, separately, reviewing the changes, with SMEs doing lighter reviews when feasible, because we're going to have to scale our review time. We might need to develop a much more detailed methodology, including writing and reviewing initial prompts, `CLAUDE.md` files, etc., so as to make it more likely that the LLM will write good code and that LLM reviews will be sensible and catch the sorts of mistakes we expect humans to catch.

JumpCrisscross 2 hours ago | parent | next [-]

> Maintainers can...insist on disclosure of LLM origin

On the internet, nobody knows you're a dog [1]. Maintainers can insist on anything. That doesn't mean it will be followed.

The only realistic solution you propose is using LLMs to review the PRs. But at that point, why even have the OSS? If LLMs are writing and reviewing the code for the project, just point anyone who would have used that code to an LLM.

[1] https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...

bigiain 2 hours ago | parent | prev [-]

Claiming maintainers can do things (that still take effort and time away from their OSS project's goals) is missing the point when the rate of slop submissions is ever increasing and malicious slop submitters refuse to follow project rules.

The Curl project refuses AI-generated code and had to close its bug bounty program due to the flood of AI submissions:

"DEATH BY A THOUSAND SLOPS

I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week. In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years."

https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...

nunez an hour ago | parent | prev | next [-]

The issue here is that LLMs are great for hobbyist stuff like you describe, but LLMs are obscenely expensive to run and keep current, so you almost HAVE to shove them in front of everything (or, to use your example, spread the diarrhea into everyone else's kitchens) to try and pay the bill.

pikseladam an hour ago | parent | prev | next [-]

That's why a PR-blocking feature is coming to GitHub.

worthless-trash 2 hours ago | parent | prev [-]

I pretty much always open an issue, then a PR; they can close it if they want. I usually have 'some' idea of the issue and use the PR as a first stab, hoping the maintainer will tell me if I'm going about it the right or wrong way.

I fully expect most of my PRs to need at least a second or third revision.