AndrewKemendo a day ago
Sounds like a pretty efficient self-correcting mechanism. I’m not sure what the problem is there.
tikkabhuna a day ago
The problem is that the destruction isn't confined to the company. If an AI agent exposes all company data, and that includes PII or health information, it could affect a large number of people.
ben_w a day ago
Normalisation of deviance is the problem: https://en.wikipedia.org/wiki/Normalization_of_deviance

Remember that these models are getting better; this means they get trusted with increasingly important things by the time an error explodes in someone's face. It would be very bad if the thing that explodes is something you value, handed off to an AI by someone who incorrectly thought it safe.

AI companies that don't openly report that their AI can make mistakes are being dishonest, and that dishonesty would make this normalisation of deviance even more prevalent than it already is.