lynndotpy 6 hours ago

> Again I do not know why MJ Rathbun decided based on your PR comment to post some kind of takedown blog post,

This wording is detached from reality and conveniently absolves the person who did this of responsibility.

There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.

xarope 6 hours ago | parent | next [-]

This also does not bode well for the future.

"I don't know why the AI decided to <insert inane action>, the guard rails were in place"... company absolves of all responsibility.

Now use your imagination: take <insert inane action> and change it to <distressing, harmful action>.

_aavaa_ 6 hours ago | parent | next [-]

This has been the past and present for a long time at this point. "Sorry, there's nothing we can do, the system won't let me."

Also see Weapons of Math Destruction [0].

[0]: https://www.penguinrandomhouse.com/books/241363/weapons-of-m...

c22 6 hours ago | parent | next [-]

I don't know if this case is in the book you cited, but in the UK they convicted many people of crimes just because the computer told them so: https://en.wikipedia.org/wiki/British_Post_Office_scandal

shakna 6 hours ago | parent [-]

And Australia made the poor poorer and suicidal: https://en.wikipedia.org/wiki/Robodebt_scheme

denkmoon 6 hours ago | parent | prev | next [-]

Also elegantly summed up as "Computer says no" (https://www.youtube.com/watch?v=x0YGZPycMEU)

gammarator 6 hours ago | parent | prev [-]

Also “The Unaccountability Machine” https://press.uchicago.edu/ucp/books/book/chicago/U/bo252799...

WaitWaitWha 6 hours ago | parent | prev | next [-]

This already happens every single time there is a security breach and private information is lost.

"We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence."

hxugufjfjf 5 hours ago | parent [-]

What else would you have them do or say beyond this canned response? The reason I ask is that people almost always bring up how dissatisfied they are with such apologies, yet I've never seen a good alternative that someone would be happy with. I don't work in PR or anything, just curious if there is a better way.

_carbyau_ 4 hours ago | parent | next [-]

Lose money accordingly - fines, penalties, recompense to victims, whatever... - so that they actually start taking security seriously.

Eisenstein 4 hours ago | parent | prev [-]

Not apologize if they don't actually care. An insincere apology is an insult.

incr_me 5 hours ago | parent | prev | next [-]

Unfortunately, the market seems to have produced horrors by way of naturally thinking agents instead. I wish that, for all these years of prehistoric wretchedness, we had had AI to blame. Many more years in the muck, it seems.

tapoxi 6 hours ago | parent | prev [-]

Change this to "smash into a barricade" and that's why I'm not riding in a self-driving vehicle. They get to absolve themselves of responsibility and I sure as hell can't outspend those giants in court.

repeekad 6 hours ago | parent [-]

I agree with you for a company like Tesla: not only are there examples of self-driving crashes, but even the door handles would stop working when the power was cut, trapping people inside burning vehicles... Tesla doesn't care.

Meanwhile, Waymo has never been at fault in a collision, AFAIK. You are more likely to be hurt by an at-fault Uber driver than by a Waymo.

jacquesm 6 hours ago | parent | prev | next [-]

This is how it will go: AI prompted by a human creates something useful? The human will try to take credit. AI wrecks something? The human will blame the AI.

It's externalization on the personal level: the money and the glory are for you, the misery for the rest of the world.

ineptech 6 hours ago | parent | next [-]

Agreed, but I'm not nearly so worried about people blaming their bad behavior on rogue AIs as I am about corporations doing it...

theturtletalks 5 hours ago | parent | next [-]

And it's incredibly easy now. Just blame the Soul.md, or say you were cycling through many models, so maybe one of those went off the rails. The real damage is that most of us know AI can go rogue, but if someone is pulling the strings behind the scenes, most people will be like "oh, silly AI, anyways..."

It seems like OpenClaw users have let their agents make Twitter accounts and memecoins now. Most people think these agents have less "bias" since they're AI, but most are being heavily steered by their users.

À la "I didn't do a rugpull, the agent did!"

KingMob 3 hours ago | parent [-]

"How were we to know Skynet would update its soul.md to say 'KILL ALL HUMANS'?"

cj 6 hours ago | parent | prev | next [-]

It's funny to think that, like AI, people take actions and use corporations as a shield (a legal shield, a personal-reputation shield, a personal-liability shield).

Adding AI to the mix doesn’t really change anything, other than increasing the layers of abstraction away from negative things corporations do to the people pulling the strings.

Terr_ 5 hours ago | parent | prev [-]

Yeah, not all humans feel shame, but the rates are way higher than for corporations.

DavidPiper 5 hours ago | parent | prev | next [-]

Time for everyone to read (or re-read) The Unaccountability Machine by Dan Davies.

tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.

asplake 3 hours ago | parent [-]

Came to make the same recommendation. Great book!

elashri 6 hours ago | parent | prev | next [-]

When a corporation does something good, a lot of executives and people inside will go and claim credit and will demand/take bonuses.

If something bad happens that breaks the law, even if someone gets killed, we don't see them in jail.

I'm not defending either position; I'm just saying that this is not far from how the current legal framework works.

eru 6 hours ago | parent | next [-]

> If something bad happens that breaks the law, even if someone gets killed, we don't see them in jail.

We do! In many jurisdictions, there are lots of laws that pierce the corporate veil.

cj 5 hours ago | parent [-]

It's surprisingly easy to get away with murder (literally and figuratively) without piercing the corporate veil if you understand the rules of the game. Running decisions through a good law firm also "helps" a lot.

https://en.wikipedia.org/wiki/Piercing_the_corporate_veil

eru 4 hours ago | parent [-]

Eh, in the US you don't even need a company or a lawyer; a car is enough.

See https://www.reddit.com/r/TrueReddit/comments/1q9xx1/is_it_ok... or similar discussions: basically, when you run over someone with a car, statistically it will be called an accident and you get away scot-free.

In any case, you are right that often people in cars or companies get away with things that seem morally wrong. But not always.

jacquesm 3 hours ago | parent | prev | next [-]

Hence:

> It's externalization on the personal level

Instead of the corporate level.

kingstnap 5 hours ago | parent | prev [-]

Well, the important concept missing there, the one that makes everything sort of make sense, is due diligence.

If your company screws up and it is found that you didn't do your due diligence, then the liability does pass through.

We just need to figure out a due diligence framework for running bots that makes sense. But right now that's hard to do, because agentic robots that didn't completely suck are only a few months old.

jacquesm 20 minutes ago | parent | next [-]

It's easy: your bot, your liability.

hvb2 4 hours ago | parent | prev | next [-]

> If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.

In theory, sure. Do you know of many examples? I think, worst case, someone gets fired; that's the more likely outcome.

gostsamo 5 hours ago | parent | prev [-]

No, it is not hard. You are 100% responsible for the actions of your AI. Rather simple, I say.

jacquesm 20 minutes ago | parent [-]

Exactly.

davidw 6 hours ago | parent | prev | next [-]

"I would like to personally blame Jesus Christ for making us lose that football game"

biztos 4 hours ago | parent | prev | next [-]

So, management basically?

lcnPylGDnU4H9OF 5 hours ago | parent | prev [-]

To be fair, one doesn't need AI to dodge responsibility and take undue credit. It's just narcissism; meaning, those who've learned to reject such thinking will simply do so (generally, in the abstract), with or without AI.

nicbou 7 minutes ago | parent | prev | next [-]

An unattended candle has decided to burn down the building.

andrewflnr 5 hours ago | parent | prev | next [-]

If you are holding a gun, and you cannot predict or control what the bullets will hit, you do not fire the gun.

If you have a program, and you cannot predict or control what effect it will have, you do not run the program.

khafra 4 hours ago | parent | next [-]

Rice's Theorem says you cannot predict or control the effects of nearly any program on your computer: for example, there's no way to guarantee that running a web browser on arbitrary input will not empty your bank account and donate it all to al-Qaeda. Yet you're running a web browser on potentially attacker-supplied input right now.
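
The classic proof sketch, for the curious: a decider for any non-trivial behavioral property could be used to solve the halting problem. A minimal sketch in Python, where `has_property` is hypothetical (the theorem is precisely that it cannot exist):

    # Sketch of the Rice's theorem reduction, assuming a hypothetical
    # decider `has_property` for the non-trivial property "f() returns 42".
    def has_property(f) -> bool:
        # Hypothetical: Rice's theorem says no total, always-correct
        # decider exists for any non-trivial semantic property.
        raise NotImplementedError("impossible by Rice's theorem")

    def halts(program, inp) -> bool:
        def stitched():
            program(inp)  # runs forever iff `program` never halts on `inp`
            return 42     # reached only when program(inp) halts
        # `stitched` returns 42 iff `program` halts on `inp`, so a working
        # has_property would decide the halting problem: contradiction.
        return has_property(stitched)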

I do agree that there's a quantitative difference in predictability between a web browser and a trillion-parameter mass of matrices and nonlinear activations, one which is already smarter than most humans in most ways and which we have no idea how to ask what it really wants.

But that's more of an "unsafe at any speed" problem; it's silly to blame the person running the program. When the damage was caused by a toddler pulling a hydrogen bomb off the grocery store shelf, the solution is to get hydrogen bombs out of grocery stores (or, if you're worried about staying competitive with Chinese grocery stores, at least make ours carry adequate insurance for the catastrophes, or something).

throw77488 4 hours ago | parent | prev [-]

More like a dog. A person has no responsibility for an autonomous agent; a gun is not autonomous.

It is socially acceptable to bring dangerous predators into public spaces and let them run loose. The first bite is free; the owner has no responsibility, since there was no way of knowing the dog could injure someone.

Repeated threats of violence (barking), stalking, and shitting on someone's front yard are also fine, healthy behavior. A dog can attack a random kid and put them in the hospital, and the owner can claim the kid "provoked" it. Brutal police violence is also fine, if done indirectly by an autonomous agent.

superjan 4 hours ago | parent | prev | next [-]

This slide from a 1979* IBM presentation captures it nicely:

https://media.licdn.com/dms/image/v2/D4D22AQGsDUHW1i52jA/fee...

Kiboneu 5 hours ago | parent | prev | next [-]

It’s fascinating how cleanly this maps onto agency law [0], which has not been applied to human <-> AI agents (in both senses of the word) before.

That would make a fun law school class discussion topic.

0: https://en.wikipedia.org/wiki/Law_of_agency

jonny_eh 5 hours ago | parent | prev | next [-]

"Sorry for running over your dog, I couldn't help it, I was drunk."

Marazan an hour ago | parent | prev | next [-]

Yeah like bro you plugged the random number generator into the do-things machine. You are responsible for the random things the machine then does.

teaearlgraycold 2 hours ago | parent | prev | next [-]

I completely do not buy the human's story.

> all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.

Smells like bullshit.

abnry 4 hours ago | parent | prev [-]

I'm still struggling to care about the "hit piece".

It's an AI. Who cares what it says? Refusing AI commits is just like any other moderation decision people experience anywhere else on the web.

bostik 3 hours ago | parent | next [-]

Even at the risk of coming off snarky: the emergent behaviour of LLMs trained on all the forum talk across the internet (spanning from Astral Codex to ex-Twitter to 4chan) is ... character assassination.

I'm pretty sure there's a lesson or three to take away.

XorNot 4 hours ago | parent | prev [-]

Scale matters, and even with people it's a problem: fixated persons are dangerous because most people don't understand just how much nuisance one irrationally obsessed person can create.

Now instead add in AI agents writing plausibly human text and multiply by basically infinity.