api 6 hours ago

It's not an AI error. It's a human error in misusing AI this way. Saying it's an AI error is like saying a hole in your drywall is a hammer error.

Unfortunately we'll probably see a trend of people using AI and then blaming it in cases where they misused it in roles it's not good at, or failed to review or monitor its output.

munk-a 6 hours ago | parent | next [-]

It's both. It's good to acknowledge that AI is easy to misuse in this manner, but that doesn't detract from the fact that ultimate responsibility lies with those who should be verifying the tool's output.

There is far too little skepticism around the magic box that solves all problems, and that's what causes issues like this. It's not the fault of the AI for being misused (as if it could be assigned liability), but this kind of misuse is far too common right now, so scare stories like this are helpful, and we should highlight the role of AI in mistakes like this.

api 5 hours ago | parent [-]

I worry that blaming AI at all actually incentivizes humans to offload things to AI that should not be offloaded, since doing so lets them escape blame.

munk-a 4 hours ago | parent [-]

That is a huge danger. Legally speaking it's not an issue, since misusing a tool doesn't relieve liability (in most circumstances, all the trivial ones at least)... but it is a more significant political issue, as evidenced by the Anthropic vs. DoD interactions, since the DoD's actions are largely immune to oversight by the justice system.

Of course, that depends on sane, non-politicized courts, which you may rightfully doubt exist right now. But assuming the system works anywhere near as designed, outsourcing a decision to AI wouldn't change liability.

For DC fans: Harvey Dent would similarly not be free from liability for actions taken after a coin flip, even if that coin could be viewed, in a certain light, as having the power to force or prevent certain actions. An AI box that tells Harvey whether to shoot or spare would be similarly irrelevant to his liability, and a scenario in which Harvey points the gun at someone and then walks away, giving the AI control over the trigger, is essentially no different. In all cases Harvey is responsible for constructing the scenario that (potentially) leads to someone's death. Moreover, even if the gun wasn't fired because the AI decided to spare the person, Harvey would be on the hook for attempted murder.

beej71 an hour ago | parent | prev [-]

We should probably stop telling the cops that this hammer is great for drywall.