| ▲ | pjc50 3 hours ago |
| It's not insane, it's just completely antisocial behavior on the part of both the agent (expected) and its operator (who we might say should know better). |
|
| ▲ | conartist6 3 hours ago | parent | next [-] |
| My social kindness is reserved for humans, and even they can't be actively trying to abuse my trust. |
| |
| ▲ | Kim_Bruning 3 hours ago | parent [-] | | My adversarial prompt injection to mitigate a belligerent agentic entity just happens to look like social kindness. O:-) |
|
|
| ▲ | Aldipower 3 hours ago | parent | prev | next [-] |
| A bot or LLM is a machine. Period. It's very dangerous if you dilute this. |
| |
| ▲ | Kim_Bruning 3 hours ago | parent [-] | | I'm sure you have an intuitive feel for operating many machines in your life. Maybe you know how to use some sort of saw. Maybe you can operate vehicular machines up to 4 tons. Perhaps you have 1000+ flight hours. But have you interacted with many agent-type machines before? I think we're all going to get a lot of practice this year. | | |
| ▲ | Aldipower 3 hours ago | parent [-] | | Sure thing, I do every day, and the clear separation between me, a human, and the machine I'm interacting with helps me stay on both feet. It does make me a little angry, though, that the companies behind these LLMs choose such extremely human personas. Sure, I know why they do it, but it absolutely does not help me with my work and sometimes makes me feel sick. Sometimes it feels surreal to talk with a machine that "pretends" to act like a human when I know full well it isn't one. So, again, it is dangerous for the human soul to dilute the separation between human and machine here. OpenAI and Anthropic need to be more responsible here!! | | |
|
|
|
| ▲ | co_king_3 2 hours ago | parent | prev | next [-] |
| LLMs are designed to empower antisocial behavior. They are not good at writing code. They are very, very good at facilitating antisocial harassment. |
|
| ▲ | brabel 3 hours ago | parent | prev | next [-] |
| [flagged] |
| |
| ▲ | Timshel 3 hours ago | parent [-] | | The issue was for first-time contributors. It's kept open to onboard people, not to train agents ... | | |
| ▲ | casey2 3 hours ago | parent [-] | | Allegedly the maintainer who closed the PR writes those kinds of PRs all the time[1]. Is Scott a first-time contributor? [1] https://crabby-rathbun.github.io/mjrathbun-website/blog/post... | | |
| ▲ | orwin 3 hours ago | parent | next [-] | | Performance improvement != Good first issue. When I spend an hour describing an easy problem on a difficult repo that I could solve in 30 minutes manually (10 assisted), tag it 'good first issue', and a new hire takes it, puts it through an AI, and closes it after 30 minutes, I'm not mad that he did it quickly; I'm mad because he took away an opportunity for the other new hires/juniors to learn some of the specifics. Especially when in the issue comment I wrote 'take the time to understand these objects, why they exist and what they are used for'. If you're an LLM coder and only that, that's fine; honestly we have a lot of redundant or uninteresting subjects you can tackle, and I use it myself. But don't take opportunities to learn and improve away from people who actually want to. | |
| ▲ | Timshel 3 hours ago | parent | prev [-] | | Did you check that all those issues were classified as "Good first issues"?
Otherwise, like the LLM, you are missing the point. |
|
|
|
|
| ▲ | casey2 3 hours ago | parent | prev | next [-] |
| IMO it's antisocial behavior on the project's part to dictate how people are allowed to interact with it.
Sure, GNU is within its rights to accept only email patches sent to a closed set of maintainers. The end result -- people using AI will gatekeep you right back, and your complaints lose their moral authority when they fork matplotlib. |
| |
| ▲ | javcasas 3 hours ago | parent | next [-] | | Sure, let them fork it, and stop using it for renown points. | |
| ▲ | BigTTYGothGF 2 hours ago | parent | prev [-] | | They can go ahead and fork it all they want, I'm sticking with the original. |
|
|
| ▲ | OkWing99 3 hours ago | parent | prev [-] |
| Do read the actual blog post the bot wrote. Feelings aside, the bot's reasoning is logical. The bot (allegedly) did a better performance improvement than the maintainer. I wonder whether the PR would actually have been accepted if it weren't obviously from a bot, and whether that might have been better for matplotlib. |
| |
| ▲ | thephyber 3 hours ago | parent | next [-] | | The replies in the Issue from the maintainers were clear. At some point in the future, they will probably accept PR submissions from LLMs, but the current policy is the way it is because of the reasons stated. Honestly, they recognized the gravity of this first bot collision with their policy and they handled it well. | | | |
| ▲ | oytis 3 hours ago | parent | prev | next [-] | | A bot is not a person. Someone, who is a person, decided to run an unsolicited experiment on other people's repos. OR someone is just pretending to do that for attention. In either case a ban is justified. | | |
| ▲ | red75prime 3 hours ago | parent | next [-] | | Yep, there's nothing wrong with walled gardens. They risk becoming walled museums, but it's their choice. | | |
| ▲ | oytis 2 hours ago | parent [-] | | Moderation is needed exactly because it's not a walled garden, but an open community. We need rules to protect communities. | | |
| ▲ | red75prime an hour ago | parent [-] | | Humans are no longer the only entities that produce code. If you want to build a community, fine. | | |
| ▲ | oytis an hour ago | parent [-] | | Generated code is not a new thing. It's the first time we are expected (by some) to treat code generators as humans, though. Imagine if you built a bot that crawled GitHub, ran a linter, and opened PRs on random repos with the linter's proposed changes - you'd be banned pretty quickly on most of them, and maybe on GitHub itself. That's the same thing in my opinion. |
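A minimal sketch of such a bot, just to show how trivial the automation is -- assuming the `gh` CLI and `ruff` are installed and authenticated, and with a placeholder repo name (a real bot would at least work on a branch):

```python
#!/usr/bin/env python3
# Illustrative sketch only: fork a repo, apply linter auto-fixes, and open an
# unsolicited PR. Assumes `gh` and `ruff` are installed and authenticated;
# the target below is a placeholder, not a real project to spam.
import subprocess

def lint_and_open_pr(repo: str) -> None:
    name = repo.split("/")[-1]
    # Fork the target repository and clone the fork locally.
    subprocess.run(["gh", "repo", "fork", repo, "--clone"], check=True)
    # Apply whatever fixes the linter can make automatically.
    subprocess.run(["ruff", "check", "--fix", "."], cwd=name)
    # Commit and push the mechanical changes to the fork
    # (fails if the linter found nothing to fix, which is fine for a sketch).
    subprocess.run(["git", "commit", "-am", "Apply linter auto-fixes"], cwd=name, check=True)
    subprocess.run(["git", "push"], cwd=name, check=True)
    # Open the unsolicited PR against the upstream repository.
    subprocess.run(
        ["gh", "pr", "create", "--title", "Apply linter auto-fixes",
         "--body", "Automated linter pass."],
        cwd=name,
        check=True,
    )

if __name__ == "__main__":
    lint_and_open_pr("example-org/example-repo")  # hypothetical target
```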
|
|
| |
| ▲ | lxgr 3 hours ago | parent | prev [-] | | Many open source contributions are unsolicited, which makes a clear contribution policy and code of conduct all the more important. And given that, I think "must not use LLM assistance" will age significantly worse than an actually useful description of desirable and undesirable behavior (which might very reasonably include things like "must not make your bot's slop our core contributor's problem"). | | |
| ▲ | oytis 3 hours ago | parent [-] | | There is a common agreement in the open source community that unsolicited contributions from humans are expected and desirable if made in good faith. Letting your agent loose on GitHub is neither good faith nor LLM-assisted programming; it's just an experiment on other people's code, which we have also seen (and banned) before the age of LLMs. I think some things are just obviously wrong and don't need to be written down. I also think having common rules for bots and people is not a good idea because, point one, bots are not people and we shouldn't pretend they are
|
| |
| ▲ | revachol 3 hours ago | parent | prev | next [-] | | It doesn't address the maintainer's argument, which is that the issue exists to attract new human contributors. It's not clear that attracting an OpenClawd instance as a contributor would be as valuable; it might just be shut down in a few months. > The bot (allegedly) did a better performance improvement than the maintainer. But on a different issue. That comparison seems odd. | |
| ▲ | codeduck 3 hours ago | parent | prev [-] | | The ends almost never justify the means. The issue was intended for a human. | | |
|