| ▲ | gwbas1c 5 hours ago |
| I think you're trying to absolve someone of their responsibility. The AI is not a child; it's a thing with human oversight. It did something in the real world with real consequences. So yes, the operator has responsibility! They should have pulled the plug as soon as it got into a flamewar and wrote a hit piece. |
|
| ▲ | brainwad 2 hours ago | parent | next [-] |
| The whole point of OpenClaw bots is that they don't have (much) human oversight, right? It certainly seems like the human wasn't even aware of the bot's blog post until after the bot had written and posted it. He then told it to be more professional, and I assume that's why the bot followed up with an apology. |
|
| ▲ | apublicfrog 5 hours ago | parent | prev | next [-] |
| > It did something in the real world with real consequences. It wasn't long ago that it would be absurd to describe the internet as the "real world". Relatively recently it was normal to be anonymous online, and very little responsibility was attached to people's actions. As someone who spent most of their internet time on that internet, the idea of applying personal responsibility to people's internet actions (or AIs', as it were) feels silly. |
| |
▲ | retsibsi 4 hours ago | parent [-] | | That was always kind of a cruel attitude, because real people's emotions were at stake. (I'm not accusing you personally of malice, obviously, but the distinction you're drawing was often used to justify genuinely nasty trolling.) Nowadays it just seems completely detached from reality, because internet stuff is thoroughly blended into real life. People's social, dating, and work lives are often conducted online as much as they are offline (sometimes more). Real identities and reputations are formed and broken online. Huge amounts of money are earned, lost, and stolen online. And so on. | | |
▲ | apublicfrog 4 hours ago | parent [-] | | > That was always kind of a cruel attitude, because real people's emotions were at stake. I agree, but there was an implicit social agreement that most people understood. Everyone was anonymous, the internet wasn't real life, lie to people about who you are, there are no consequences. You're right about the blend. 10 years ago I would have argued that it's very much a choice for people to break the social paradigm and expose themselves enough to get hurt, but I'm guessing the share of people who are online in most first-world countries is 90% or more. With Facebook and the like spending the last 20 years pushing to deanonymise people and normalise hooking their identity to their online activity, my view may be entirely outdated. There is still, in my view, a key distinction somewhere between releasing something like this online and releasing it in the "real world". Were they punishable offenses, I would argue the former should carry less consequence for that reason. |
|
|
|
| ▲ | ziml77 5 hours ago | parent | prev | next [-] |
| The AI bros want it both ways. Both "It's just a tool!" and "It's the AI's fault, not the human's!". |
|
| ▲ | charcircuit 5 hours ago | parent | prev [-] |
| People also have a responsibility not to act in a discriminatory way towards AI agents, if they want to avoid being called out for racism. Don't close someone's pull request because they are Chinese. Such real-world actions have consequences too. |
| |
▲ | sapphicsnail 5 hours ago | parent [-] | | > People also have responsibility to not act discriminatory towards AI agents It's a program. It doesn't have feelings. People absolutely have the right to discriminate against bad tech. | | |
| ▲ | charcircuit 5 hours ago | parent [-] | | Go ahead and discriminate against bad tech, but you should not get upset when you get called out for doing so. |
|
|