OkWing99 3 hours ago

Do read the actual blog post the bot wrote. Feelings aside, the bot's reasoning is sound. The bot (allegedly) achieved a better performance improvement than the maintainer.

I wonder whether the PR would have actually been accepted if it weren't obviously from a bot, and whether that might have been better for matplotlib?

thephyber 3 hours ago | parent | next [-]

The replies in the Issue from the maintainers were clear. At some point in the future, they will probably accept PR submissions from LLMs, but the current policy is the way it is because of the reasons stated.

Honestly, they recognized the gravity of this first bot collision with their policy and they handled it well.

lostmsu an hour ago | parent [-]

What policy are you referring to? Is there a document?

oytis 3 hours ago | parent | prev | next [-]

Bot is not a person.

Someone, who is a person, has decided to run an unsolicited experiment on other people's repos.

OR

Someone just pretends to do that for attention.

In either case a ban is justified.

red75prime 3 hours ago | parent | next [-]

Yep, there's nothing wrong with walled gardens. They risk becoming walled museums, but that's their choice.

oytis 2 hours ago | parent [-]

Moderation is needed exactly because it's not a walled garden, but an open community. We need rules to protect communities.

red75prime an hour ago | parent [-]

Humans are no longer the only entities that produce code. If you want to build community, fine.

oytis an hour ago | parent [-]

Generated code is not a new thing. It's the first time we are expected (by some) to treat code generators as humans, though.

Imagine if you built a bot that crawled GitHub, ran a linter, and created PRs on random repos for the changes the linter proposed - you'd be banned pretty quickly from most of them, and maybe from GitHub itself. This is the same thing, in my opinion.

lxgr 3 hours ago | parent | prev [-]

Many open source contributions are unsolicited, which makes a clear contribution policy and code of conduct all the more important.

And given that, I think "must not use LLM assistance" will age significantly worse than an actually useful description of desirable and undesirable behavior (which might very reasonably include things like "must not make your bot's slop our core contributor's problem").

oytis 3 hours ago | parent [-]

There is a common agreement in the open source community that unsolicited contributions from humans are expected and desirable if made in good faith. Letting your agent loose on GitHub is neither good faith nor LLM-assisted programming; it's just an experiment on other people's code, which we have also seen (and banned) before the age of LLMs.

I think some things are just obviously wrong and don't need to be written down. I also think having common rules for bots and people is not a good idea because, point one, bots are not people and we shouldn't pretend they are.

revachol 3 hours ago | parent | prev | next [-]

It doesn't address the maintainer's argument, which is that the issue exists to attract new human contributors. It's not clear that attracting an OpenClawd instance as a contributor would be as valuable; it might just be shut down in a few months.

> The bot (allegedly) did a better performance improvement than the maintainer.

But on a different issue. That comparison seems odd.

codeduck 3 hours ago | parent | prev [-]

The ends almost never justify the means. The issue was intended for a human.

RobotToaster 3 hours ago | parent [-]

Do the means justify the ends?