mxwsn 5 days ago

AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.

edit: ability without accountability is the catchier motto :)

adriand 5 days ago | parent | next [-]

This is a great observation. I think it also accounts for what is so exhausting about AI programming: the need for such careful review. It's not just that you can't entirely trust the agent, it's also that you can't blame the agent if something goes wrong.

dsign 4 days ago | parent | prev | next [-]

Correct.

This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because we already know how to do that first step (run more training steps with new data) and it is not controversial at all.

To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it to. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. And we can't be masters of intelligences greater than ours.

simianwords 5 days ago | parent | prev | next [-]

This statement is vague and hollow and doesn't pass my sniff test. Every technology has moved accountability one layer up; none has removed it completely.

Why would that be any different with AI?

leeoniya 5 days ago | parent | prev | next [-]

i've also made this argument.

would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? the answer today is, "obviously not". i don't know if this will ever change, tbh.

bbqfog 4 days ago | parent [-]

I would. If something has proven results, it won't matter to me if a human is in the loop or not. Waymo has worked great for me for instance.

leeoniya 2 days ago | parent [-]

Waymo itself was not designed, implemented, and shipped by AI.

i suspect humans had to invest millions of hours into writing the code, the tests, and validating the outputs.

bbqfog 2 days ago | parent [-]

It's "designing" the route that gets me to my destination without a human in the loop, and I'm not bothered by that at all.

ares623 5 days ago | parent | prev | next [-]

Removing accountability is a feature

ScotterC 5 days ago | parent | prev | next [-]

I’m surprised that I don’t hear this mentioned more often, not even in an Eng leadership format of taking accountability for your AI’s pull requests. But it’s absolutely true. Capitalism runs on accountability and trust, and we are clearly not going to trust a service that doesn’t have a human responsible at the helm.

bbqfog 4 days ago | parent | prev [-]

That's just a side effect of toxic work environments. If AI can create value, someone will use it to create value. If companies won't use AI merely because they can't blame it when their boss yells at them, then they also won't capture that value.