theahura 4 hours ago

@Scott thanks for the shout-out. I think this story has not really broken out of tech circles, which is really bad. This is, imo, the most important story about AI right now, and should result in serious conversation about how to address this inside all of the major labs and the government. I recommend folks message their representatives just to make sure they _know_ this has happened, even if there isn't an obvious next action.

user34283 24 minutes ago | parent | next [-]

Important how? It seems next to irrelevant to me.

Someone set up an agent to interact with GitHub and write a blog about it. I don't see what you think AI labs or the government should do in response.

greggoB 9 minutes ago | parent [-]

> Someone set up an agent to interact with GitHub and write a blog about it

I challenge you to find a way to be even more dishonest via omission.

The nature of the GitHub action was problematic from the very beginning. The contents of the blog post constituted a defamatory hit piece. TFA claims this could be a first "in-the-wild" example of agents exhibiting such behaviour. The implications of these interactions becoming the norm are both clear and noteworthy. What else do you think is needed, a cookie?

protocolture 4 hours ago | parent | prev [-]

It's only the most important story if you can prove the OP didn't fabricate this entire scenario for attention.

hxugufjfjf 3 hours ago | parent | next [-]

I don’t think the burden of proof lies on OP here. I also don’t think he fabricated it.

protocolture 3 hours ago | parent [-]

If he wasn't getting the vast majority of the attention from publishing about it, I would agree.

simonw 4 hours ago | parent | prev | next [-]

That's a bizarre thing to accuse someone of doing.

Avicebron 3 hours ago | parent | next [-]

It's not really... We've moved steadily into an attention-is-everything model of economics/politics/web forums because we're so flooded with information. Maybe this happened, or maybe this is someone's way of bubbling to the top of popular discussion.

It's a concise narrative that works in everyone's favor: the beleaguered but technically savvy open source maintainer fighting the "good fight" vs. the outstandingly independent and competent "rogue AI."

My money is on both parties wanting it to be true. Whether it is or not isn't the point.

polotics 3 hours ago | parent | prev | next [-]

The risk/reward equation on the attention a matplotlib maintainer gets... makes me think the likelihood of a fake is zero percent.

cube00 an hour ago | parent [-]

He's more than a "matplotlib maintainer"; he's also the full-time founder of a one-year-old startup "to give spacecraft operators the tools they need to ensure their satellites can survive long-term in a turbulent space weather environment."

delaminator an hour ago | parent | prev [-]

https://www.fakehatecrimes.org/

MattRix 4 hours ago | parent | prev [-]

Anyone who has used OpenClaw knows this is VERY plausible. I don’t know why someone would go through all the effort to fake it. Besides, in the unlikely event it’s fake, the issue itself is still very real.

protocolture 3 hours ago | parent [-]

I think it's very plausible in both directions. What I find implausible is that someone's running a "social experiment" with a couple grand's worth of API credit without owning it. Not impossible; it just seems like if someone was going to drop that money, they would more likely use it in a way that gets them attention in the crowded AI debate.

mschild an hour ago | parent [-]

I think the "social experiment" framing is a cop-out adopted after it failed. If the PR had been accepted, we'd probably see a blog post show up on HN saying that agents can successfully contribute to open source.