jacquesm 6 hours ago

This is how it will go: AI prompted by human creates something useful? Human will try to take credit. AI wrecks something: human will blame AI.

It's externalization on the personal level, the money and the glory is for you, the misery for the rest of the world.

ineptech 6 hours ago | parent | next [-]

Agreed, but I'm not nearly so worried about people blaming their bad behavior on rogue AIs as I am about corporations doing it...

theturtletalks 5 hours ago | parent | next [-]

And it's incredibly easy now. Just blame the Soul.md, or say you were cycling through many models, so maybe one of those went off the rails. The real damage is that most of us know AI can go rogue, so if someone is pulling the strings behind the scenes, most people will be like "oh silly AI, anyways..."

It seems like the OpenClaw users have let their agents make Twitter accounts and memecoins now. Most people are thinking these agents have less "bias" since it's AI, but most are being heavily steered by their users.

A la "I didn't do a rugpull, the agent did!"

KingMob 3 hours ago | parent [-]

"How were we to know Skynet would update its soul.md to say 'KILL ALL HUMANS'?"

cj 5 hours ago | parent | prev | next [-]

It’s funny to think that, like AI, people take actions and use corporations as a shield (legal shield, personal reputation shield, personal liability shield).

Adding AI to the mix doesn’t really change anything, other than increasing the layers of abstraction away from negative things corporations do to the people pulling the strings.

Terr_ 5 hours ago | parent | prev [-]

Yeah, not all humans feel shame, but the rates are way higher.

DavidPiper 5 hours ago | parent | prev | next [-]

Time for everyone to read (or re-read) The Unaccountability Machine by Dan Davies.

tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.

asplake 3 hours ago | parent [-]

Came to make the same recommendation. Great book!

elashri 6 hours ago | parent | prev | next [-]

When a corporation does something good, a lot of executives and people inside will go and claim credit and will demand/take bonuses.

If something bad happens that breaks the law, even if someone gets killed, we don't see them in jail.

I'm not defending either position, I'm just saying that's not far from how the current legal framework works.

eru 6 hours ago | parent | next [-]

> If something bad happens that breaks the law, even if someone gets killed, we don't see them in jail.

We do! In many jurisdictions, there are lots of laws that pierce the corporate veil.

cj 5 hours ago | parent [-]

It's surprisingly easy to get away with murder (literally and figuratively) without piercing the corporate veil if you understand the rules of the game. Running decisions through a good law firm also "helps" a lot.

https://en.wikipedia.org/wiki/Piercing_the_corporate_veil

eru 4 hours ago | parent [-]

Eh, in the US you don't even need a company nor a lawyer, a car is enough.

See https://www.reddit.com/r/TrueReddit/comments/1q9xx1/is_it_ok... or similar discussions: basically, when you run over someone in a car, statistically they will call it an accident and you get away scot-free.

In any case, you are right that often people in cars or companies get away with things that seem morally wrong. But not always.

jacquesm 3 hours ago | parent | prev | next [-]

Hence:

> It's externalization on the personal level

Instead of the corporate level.

kingstnap 5 hours ago | parent | prev [-]

Well, the important concept missing there, the one that makes everything sort of make sense, is due diligence.

If your company screws up and it is found out that you didn't do your due diligence, then the liability does pass through.

We just need to figure out a due diligence framework for running bots that makes sense. But right now that's hard to do, because agentic bots that don't completely suck are just a few months old.

jacquesm 19 minutes ago | parent | next [-]

It's easy: your bot: your liability.

hvb2 4 hours ago | parent | prev | next [-]

> If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.

In theory, sure. Do you know of many examples? I think, worst case, someone being fired is the more likely outcome.

gostsamo 5 hours ago | parent | prev [-]

No, it is not hard. You are 100% responsible for the actions of your AI. Rather simple, I say.

jacquesm 18 minutes ago | parent [-]

Exactly.

davidw 6 hours ago | parent | prev | next [-]

"I would like to personally blame Jesus Christ for making us lose that football game"

biztos 4 hours ago | parent | prev | next [-]

So, management basically?

lcnPylGDnU4H9OF 5 hours ago | parent | prev [-]

To be fair, one doesn't need AI to avoid responsibility and accept undue credit. It's just narcissism; those who've learned to reject such thinking will do so (generally, in the abstract), with or without AI.