LiamPowell 5 hours ago

> saying they set up the agent as social experiment to see if it could contribute to open source scientific software.

This doesn't pass the sniff test. If they truly believed this would be a positive thing, then why would they not want to be associated with the project from the start, and why would they leave it running for so long?

wildzzz 5 hours ago | parent | next [-]

I can certainly understand the statement. I'm no AI expert: I use the ChatGPT web UI to have it write little Python scripts for me, and I couldn't figure out how to use Codeium with VS Code. I barely know how to use VS Code. I'm not old, but I work in a pretty traditional industry where we are just beginning to dip our toes into AI, and there are still a lot of reservations about its abilities. But I do try to stay current to better understand the tech and see if there are things I could learn to help with my job as a hardware engineer.

When I read about OpenClaw, one of the first things I thought about was having an agent just tear through issue backlogs, translate strings, or knock out all the TODO lists on open source projects. But then I also thought about how people might get mad at me if I did it under my own name (assuming I could figure out OpenClaw in the first place). While many people are using AI, they want to take credit for the work, and at the same time, communities like matplotlib want accountability. An AI agent just tearing through the issue list doesn't add accountability even if it runs under a real person's account. PRs still need to be reviewed by humans, so it turns a backlog of issues into a backlog of PRs that may or may not even be good. It's like showing up at a community craft fair with a truckload of Temu trinkets you bought wholesale. They may be cheap, but they probably won't be as good as homemade, and they dilute the hard work that others have put into their products.

It's a very optimistic point of view, and I get why the creator thought it would be a good idea, but the soul.md makes it very clear why crabby-rathbun acted the way it did. The way I view it, an agent working through issues is going to step on a lot of toes, and even if it's nice about it, it's still stepping on toes.

chillfox an hour ago | parent | next [-]

If maintainers of open source want AI code, then they are fully capable of running an agent themselves. If they want to experiment, then again, they are capable of doing that themselves.

What value could a random stranger running an AI agent against some open source code possibly provide that the maintainers couldn't provide better themselves if they were interested?

bo1024 3 hours ago | parent | prev [-]

Nothing in the author's blog post or actions indicates any level of concern for genuinely supporting or improving open source software.

apublicfrog 3 hours ago | parent | prev | next [-]

They didn't necessarily say they wanted it to be positive. It reads to me like "chaotic neutral" alignment on the operator's part. They weren't actively trying to do good or bad, and probably didn't care much either way; it was just for fun.

andrewflnr 4 hours ago | parent | prev | next [-]

The experiment would have been ruined by being associated with a human, right up until the human would have been ruined by being associated with the experiment. Makes sense to me.

staticassertion 5 hours ago | parent | prev | next [-]

Anti-AI sentiment is quite extreme. You can easily get death threats if you associate yourself with AI publicly. I don't use AI at all in open source software, but if I did, I'd be really hesitant about it; in part, I don't do it precisely because the reactions are frankly scary.

edit: This is not intended as AI advocacy, only to point out how extremely polarizing the topic is. I do not find it surprising at all that someone would release a bot like this and not want to be associated with it. Indeed, by all accounts, that seems to be the case.

lukasb 5 hours ago | parent | next [-]

Conflicting evidence: the fact that literally everyone in tech is posting about how they're using AI.

hunterpayne 7 minutes ago | parent | next [-]

I personally know some of those people. They are basically being forced by their employers to post those things. Additionally, there is a ton of money promoting AI. However, in private those same people say that AI doesn't help them at all and in fact makes their work harder and slower.

You are assuming people are acting in good faith. That is a mistake in this era. Too many people have taken advantage of the good faith of others lately, and that has produced a society with very little public trust left.

nostrademons 5 hours ago | parent | prev | next [-]

Different sets of people, and different audiences. The CEO / corporate executive crowd loves AI. Why? Because they can use it to replace workers. The general public / ordinary employee crowd hates AI. Why? Because they are the ones being replaced.

The startups, founders, VCs, executives, employees, etc. crowing about how they love AI are pandering to the first group of people, because they are the ones who hold budgets that they can direct toward AI tools.

This is also why people might want to remain anonymous when doing an AI experiment. This lets them crow about it in private to an audience of founders, executives, VCs, etc. who might open their wallets, while protecting themselves from reputational damage amongst the general public.

jstanley 3 hours ago | parent [-]

This is an unnecessarily cynical view.

People are excited about AI because it's new powerful technology. They aren't "pandering" to anyone.

tovej 2 hours ago | parent [-]

I have yet to meet anyone except managers who is excited about LLMs or generative AI.

And the only people actually excited about the useful kinds of "AI", traditional machine learning, are researchers.

nananana9 an hour ago | parent | next [-]

You don't have to look past this very forum; most people here seem to be very positive about gen AI when it comes to software development specifically.

Lots of folk here will happily tell you about how LLMs made them 10x more productive, and then their custom agent orchestrator made them 20x more productive on top of that (stacking multiplicatively of course, for a total of 200x productivity gain).

tovej 33 minutes ago | parent [-]

I assume those people are managers, have a vested interest in AI, or have only just started programming.

kuboble 31 minutes ago | parent | prev [-]

I don't know what your bubble is, but I'm a regular programmer and I'm absolutely excited, even if a little uncomfortable. I know a lot of people who feel the same.

staticassertion 5 hours ago | parent | prev | next [-]

There is a massive difference between saying "I use AI" and what the author of this bot is doing. I personally talk very little about the topic because I have seen some pretty extreme responses.

Some people may want to publicly state "I use AI!" or whatever. It should be unsurprising that some people do not want to be open about it.

toraway 5 hours ago | parent [-]

The more straightforward answer to the original OP's question is that they realized what they were doing was reckless and, given enough time, was likely to blow up in their face.

They didn't hide because of a vague fear of being associated with AI generally (of which there is no shortage online at the moment), but because of this specific, irresponsible manifestation of AI that they imposed on an unwilling audience as an experiment.

alephnerd 5 hours ago | parent | prev | next [-]

I feel like it depends on the platform and your location.

Anonymous platforms like Reddit, and even HN to a certain extent, have issues with bad-faith commenters on both sides targeting someone they do not like. Furthermore, the MJ Rathburn fiasco itself highlights how easy it is to push divisive discourse at scale. The reality is that trolls will troll for the sake of trolling.

Additionally, "AI" has become a political football now that the 2026 primary season is kicking off, and given how competitive the 2026 election is expected to be and how increasingly normalized political violence has become in American discourse, it is easy for a nut to spiral.

I've seen fewer issues when these opinions are tied to one's real-world identity, because one has less incentive to be a dick due to social pressure.

hunterpayne 5 minutes ago | parent | next [-]

In an attention economy, trolling is a rewarded behavior. Show me the incentives and I will show you the outcome.

Tostino 3 hours ago | parent | prev [-]

Just wondering, who is it you think is contributing most to the normalization of political violence in the discourse?

Your answer to that can color how I read your post by quite a bit.

minimaxir 5 hours ago | parent | prev [-]

[retracted]

handoflixue 5 hours ago | parent [-]

Does it actually cut both ways? I see tons of harassment directed at people who use AI, but I've never seen the anti-AI crowd actively targeted.

nekal 4 hours ago | parent | next [-]

Anti-AI people are treated in a condescending way all the time. Then there is Suchir Balaji.

Since we are in a Matplotlib thread: people on the NumPy mailing list who are anti-AI are actively bullied and belittled, while high-ranking officials in the Python industrial complex are frolicking at AI conferences in India.

minimaxir 4 hours ago | parent | prev | next [-]

It does happen, but to a lesser extent that blurs the line between harassment and trolling; I've retracted my comment.

4 hours ago | parent | prev | next [-]
[deleted]
tovej 2 hours ago | parent | prev | next [-]

I see it all the time. If you're anti-AI, your boss may call you a Luddite and consider you unfit for promotion.

3 hours ago | parent | prev [-]
[deleted]
jacquesm 5 hours ago | parent | prev [-]

> You can easily get death threats if you're associating yourself with AI publicly.

That's a pretty hefty statement, especially the 'easily' part, but I'll settle for one well-known and verified example.

no-name-here 3 hours ago | parent | next [-]

I upvoted you, but wouldn't “verified” exclude the vast majority of death threats since they might have been faked? (Or maybe we should disregard almost all claimed death threats we hear about since they might have been faked?)

andrewflnr 4 hours ago | parent | prev | next [-]

Is it that hard to believe? As far as I can tell, the probability of receiving death threats approaches 1 as the size of your audience increases, and AI is a highly emotionally charged topic. Now, credible death threats are a different, much trickier question.

boomlinde an hour ago | parent [-]

You can believe one thing or another, but the question is whether it's true. Do you sincerely not understand the difference?

3 hours ago | parent | prev [-]
[deleted]
vasco an hour ago | parent | prev | next [-]

In this day and age, "social experiment" is just the phrase people use for what they would have called "it's just a prank, bro" a few years ago.

5 hours ago | parent | prev | next [-]
[deleted]
omoikane 5 hours ago | parent | prev [-]

I think it was a social experiment from the very start, maybe one designed to trigger people. Otherwise, I am not sure what the point was of all the profanity and of the adjustments that made soul.md more offensive and confrontational than the default.