jjk166 | 2 hours ago
> Blocking (or more accurately: restricting) access works pretty well for many other things that we know will be used in ways that are harmful.

Historically, just having to go in person to a courthouse and request to view records was enough to keep most people from abusing the public information they had. If all you care about is preventing the information from being abused, preventing it from being used is a great option. This has significant negative side effects, though. For court cases it means a lack of accountability for the justice system, excessive speculation in the court of public opinion, social stigma and innuendo, and the use of inappropriate proxies in lieu of good data.

The fact that the access speedbump which supposedly worked in the past is no longer good enough is proof that an access speedbump is not a good way to solve this. Say we block internet access but keep in-person records access in place. What's to stop Google or anyone else from hiring people to visit the brick-and-mortar repositories and collect the data, exactly the way they sent cars out to map all the streets? And why are we assuming that AI training on this data is a net social ill? While we can certainly imagine abuses, it's not hard to imagine real benefits.

> What do you think the "right way" to deal with the problem is, because we already know that "hope that people choose to be better/smarter/more respectful" isn't going to work.

We've been dealing with people making bad decisions from data forever. Take redlining, where institutions would refuse to sell homes to, or guarantee loans for, minorities. Sometimes they used computer models that didn't track skin color but relied on some proxy for it. At the end of the day you can't stop this problem by trying to hide what race people are. You need to explicitly ban that behavior. And we did.
Institutions that attempt it are vulnerable both to investigation by government agencies and to civil suits from their victims. It's not perfect, and there are still abuses, but it's so much better than if we all just closed our eyes and pretended that if the data were harder to get, the discrimination wouldn't happen. If you don't want algorithms to come to spurious and discriminatory conclusions, you must make algorithms auditable and give the public reasonable access to interrogate the algorithms that affect them. If an AI rejects my loan application, you'd better be able to prove that the AI isn't doing so based on my skin color. If you can do that, you should also be able to prove it's not doing so based on an expunged record. If evidence comes out that an AI has been using such data to reach such decisions, those who made it and those who employ it should be liable for damages and, depending on factors like intent, adherence to best practices, and severity, potentially face criminal prosecution. Basically, AI should be treated exactly the same as a human using the same data to come to the same conclusion.