ben_w a day ago
Normalisation of deviance is the problem: https://en.wikipedia.org/wiki/Normalization_of_deviance

Remember that these models are getting better; this means they get trusted with increasingly important things by the time an error explodes in someone's face. It would be very bad if the thing that explodes is something you value, handed off to an AI by someone who incorrectly thought it safe.

AI companies that don't openly report that their AI can make mistakes are being dishonest, and that dishonesty would make this normalization of deviance even more prevalent than it already is.
AndrewKemendo a day ago | parent
That’s not a technical/AI problem in any sense; it’s a social problem of organizing and coordinating control structures. Further, it’s only a problem to the extent that the downsides or risks are not accounted for, which, again, is a social problem, not a technological one.

This isn’t a problem for organizations that have well-aligned incentives across their workflows. A well-organized company with solid incentives is not going to diminish its own capacity by prematurely deploying a technology that isn’t actually capable of improving it.

The issue is that 99% of the organizations people deal with have incentives entirely orthogonal to theirs. People then attribute the pain of dealing with those organizations to the technology rather than to the misaligned incentives.