program_whiz | 6 hours ago
There's a simple solution. If medical malpractice occurs, bring a lawsuit against the LLM company. If a license is revoked as part of that finding, unfortunately that revocation applies to the "doctor" (e.g. ChatGPT). Same for self-driving: hold each car liable like a normal driver, with the owning AI company bearing the liability. After ~20 tickets and accidents in a week, plus a few blocked ambulances, the only option is to revoke the driver's license — and since all the cars share the same brain, they all share one license.

This would make AI companies more cautious, advertising only capabilities they actually have and can verify. They would be held to the standard of a human, which I think is reasonable (why replace humans if the outcome is worse, and why reduce protections for individuals?).

To make the analogy clearer: even if a telemedicine doctor sees 10,000 patients a day all over the world, they are still liable for any malpractice. If it's bad enough, their license is revoked, regardless of how many patients they see or where. Same deal with AI / LLMs — if ChatGPT gives medical advice and it hurts someone, that's the same as a human doing so: it's malpractice, and lawsuits can follow. If these systems are somehow licensed, then that license can be revoked. We would revoke a human's license for a single offense in some cases; the same should apply to AI.