ipython 7 hours ago

Glad to see the common-sense rule that only humans can be held accountable for code generated by AI agents.

pixel_popping 7 hours ago | parent [-]

Literally insane that some projects blanket-ban AI when responsibility rests with the human in the end anyway.

tom_ 4 hours ago | parent | next [-]

It is no more insane than doing the opposite. This whole business has yet to play itself out.

KoftaBob 3 hours ago | parent | prev | next [-]

It's just a form of sanctimonious virtue-signaling that's trendy right now.

daveguy 6 hours ago | parent | prev | next [-]

Not insane at all. Just a very useful shortcut. Not everyone wants to move fast and break shit.

pixel_popping 6 hours ago | parent [-]

I still think it's insane. Why would you care about the "origin" of the code as long as there is a human accountable (whom you can ban anyway)?

59nadir 6 hours ago | parent | next [-]

Because you don't want to deal with people who can't write their own code. If they can, the rule will do nothing to stop them from contributing. It'll only matter if they simply couldn't make their contribution without LLMs.

pixel_popping 6 hours ago | parent [-]

So tomorrow, if a model genuinely finds a bunch of real vulnerabilities, you would just ignore them? That makes no sense.

59nadir 6 hours ago | parent [-]

An LLM finding problems in code is not at all the same as someone using it to contribute code they couldn't write, or haven't written, themselves. A report stating "there is a bug/security issue here" is not itself something I have to maintain; it's something I can react to by writing a fix, and it's that fix I then have to maintain.

jeremyjh 2 hours ago | parent | prev | next [-]

Because they aren’t accountable - after it is merged only I am. And why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time? Anytime I want to work through a pile of slop I can ask for one, but I don’t work that way. I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.

streetfighter64 6 hours ago | parent | prev [-]

If your doctor told you he used an Ouija board to find your diagnosis, would you care about the origin of the diagnosis, or just trust that he'll be accountable for it?

pixel_popping 5 hours ago | parent [-]

If the Ouija board was powered by Opus, who knows :D

pydry 6 hours ago | parent | prev [-]

And yet it puts a stop to the tsunami of slop and it's pretty much impossible to prove anything of value was lost.

pixel_popping 6 hours ago | parent [-]

But why? It's a human making the PR, and you can shame/ban that human anyway.

materielle 2 hours ago | parent | next [-]

I think AI bans are more common in projects where the maintainers are nice people that thoughtfully want to consider each PR and provide a reasoned response if rejected.

That’s only feasible when the people who open PRs are acting in good faith, and keep both the quality and volume of PRs at a level the maintainers can realistically (and ought to) review in their 2-3 hours of weekly free time.

Linux is a bit different. Your code can be rejected, or not even looked at in the first place, if it’s not a high quality and desired contribution.

Also, it’s not just about PR quality, but also volume. It’s possible for contributions to be a net benefit in isolation. But most open source maintainers only have an hour or so a week to review PRs and need to prioritize aggressively. People who code with AI agents would do well to ask: "Does this PR align with the priorities and time availability of the maintainer?"

For instance, I’m sure we could point AI at many open source projects and tell it to optimize performance. And the agent would produce a bunch of high quality PRs that are a good idea in isolation. But what if performance optimization isn’t a good use of time for a given maintainer’s weekly code review quota?

Sure, maintainers can simply close the PR without a reason if they don’t have time.

But I fear we are taking advantage of nice people, who want to give a reasoned response to every contribution, but simply can’t keep up with the volume that agents can produce.

podgietaru 5 hours ago | parent | prev | next [-]

Volume: things take time to review. If you're inundated with PRs, it's harder to curate in general.

yoyohello13 6 hours ago | parent | prev | next [-]

> it's a human making the PR

Is it? Remember when that agent wrote a hit piece about the maintainer because he wouldn't merge its PR?

pixel_popping 6 hours ago | parent [-]

That's a different issue actually.

Ekaros 3 hours ago | parent | prev [-]

You are treating humans as reasonable actors. They very often are not. On easy-to-access platforms like GitHub, you can have humans who act purely as intermediaries between an LLM and GitHub, without actually checking or understanding what they put in a pull request. Banning these people outright with clear rules is much faster and easier than trying to argue with them.

Linux is somewhat harder to contribute to, and it already has sufficient barriers in place, so it can rely on more reasonable human actors.