arter45 a day ago

A big part of the legal implications of LLMs and AI in general is about accountability.

If you are treated by a human being and it goes sideways, you can sue them and/or the hospital. Granted, you may not always win and it may take time, but there is at least some recourse.

If you are "treated" by an LLM and it goes sideways, good luck trying to sue OpenAI or whoever is running the model. It's not a coincidence that LLM providers are trying to put disclaimers and/or claims in their ToS that LLM advice is not necessarily good.

Same goes for privacy. Doctors and hospitals are regulated in a way that gives you a reasonable, often very strong, expectation of privacy. Consider doctor-patient confidentiality, for example. That doesn't mean leaks never happen, but when they do, you can hold someone accountable. If you send your medical data to ChatGPT and there is a leak, are you going to sue OpenAI?

The answer in both cases is, yes, you should probably be able to sue an LLM provider. But because LLM providers have a lot of money (way more than any hospital!), are usually global (so jurisdiction can be challenging), and often state themselves that LLM advice is not necessarily reliable (something doctors cannot easily claim), you may find suing them far more challenging than suing a doctor or a hospital.