reactordev 2 days ago
I’m not denying that we tried, are trying, and will try again… I’m saying that we shouldn’t. By all means, use cameras and sensors and all that to track a person of interest, but don’t feed it to an AI agent that determines whether or not to issue a warrant.
aspenmayer 2 days ago | parent
If it’s anything like the AI expert systems I’ve heard about in insurance, it will be a tool optimized for low effort that ends up being used carelessly by end users, which isn’t necessarily the fault of the AI. In automated insurance claims adjustment, the AI writes a report justifying an appeal of patient care that was already approved by a human doctor who had actually seen the patient in question, and then a human doctor working for the insurance company clicks an appeal button — after reviewing the AI output, one would hope. AI systems with a human in the loop are supposed to keep the AI and its decisions accountable, but in practice it seems to be more of an accountability dodge: each party can blame the other, and no single party actually bears responsibility, because there is no penalty for failure or error to the system or its operators.