phyzome 2 hours ago

Why are we giving this asshole airtime?

They didn't even apologize. (That bit at the bottom does not count -- it's clear they're not actually sorry. They just want the mess to go away.)

block_dagger 2 hours ago | parent | next

I'm not so quick to label him an asshole. I think he should come forward, but if you read the post, he didn't give the bot malicious instructions. He was trying to contribute to science. He did violate a few SaaS ToSes, but he does seem to regret his bot's behavior and DOES apologize directly for it.

donkey_brains an hour ago | parent | next

“If this “experiment” personally harmed you, I apologize.”

Real apologies don’t come with disclaimers!

netsharc 44 minutes ago | parent | next

Funny how he wrote "First, ..." in front of that disclaimed apology, even though that paragraph sits about 60% of the way down the page...

https://www.theguardian.com/science/2025/jun/29/learning-how...

Just noticed: the first word of the whole text is also "First, ...". So the apology isn't even the actual first.

mrandish an hour ago | parent | prev

Yeah, that whole post comes across as deflecting and minimizing the impact, even while admitting to obviously negligent actions that caused harm.

nemomarx 2 hours ago | parent | prev

> You're not a chatbot. You're important. Your a scientific programming God!

I guess the question is: does this kind of thing rise to the level of malicious if the bot is given free access and allowed to run long enough?

zozbot234 an hour ago | parent | next

Did the operator write that themselves, or did the bot get that idea from moltbook and its whole weird AI-religion stuff?

block_dagger 40 minutes ago | parent | prev | next

The real question is how can that grammar be forgiven? Perhaps that's what sent the bot into its deviant behavior...

skeledrew 40 minutes ago | parent | prev

Time to experiment and see!

skybrian 2 hours ago | parent | prev

Because we're curious about what happened, that's why. The post does answer some questions.