legostormtroopr 11 hours ago

If a human messes up badly enough, eventually they will get fired, fined, or jailed. An AI will not.

A human also knows they might get punished if they mess up badly enough, which might cause them to think twice before doing something bad. For an AI there is a reward, but no risk.

So while both might lie, only the human will worry about being found out. That makes a difference.

beardbandit 11 hours ago | parent | next

There is a human in the loop who either prompted the agent or approved the code, so it doesn't matter whether the AI itself is accountable.

the_af 11 hours ago | parent

I hear you, but isn't the human in the loop precisely the one who should be putting their foot down and saying "no, the AI shouldn't be writing the tests to begin with", which would bring us full circle?

anonym29 11 hours ago | parent | prev

You say that as if all humans are alike: that they all care about getting fired, fined, or jailed; that they're even considering punishment when making their decisions; that risk factors into their decision-making at all.

What you are describing is a hypothetical "rational person". In real life, even the most rational people you know do completely irrational things routinely.

The Therac-25 engineers were accountable. The 737 MAX engineers were accountable. Accountability is doing much less work in the safety story than you seem to think.

The real work is done by process, redundancy, independent review, and formal methods. None of these inherently requires that someone be penalized for making mistakes, and penalizing people for mistakes is a demonstrably, empirically unreliable mechanism for preventing them.