nerdsniper 13 hours ago

FWIW, in Walters v. OpenAI, a judge rejected that argument when OpenAI made it in its motion to dismiss [0]. The case was ultimately decided on different grounds, though: namely, that the user knew the statements were a hallucination, so there was no defamation.

> First, Riehl did not and could not reasonably read ChatGPT’s output as defamatory. By its very nature, AI-generated content is probabilistic and not always factual, and there is near universal consensus that responsible use of AI includes fact-checking prompted outputs before using or sharing them. OpenAI clearly and consistently conveys these limitations to its users. Immediately below the text box where users enter prompts, OpenAI warns: “ChatGPT may produce inaccurate information about people, places, or facts.” Before using ChatGPT, users agree that ChatGPT is a tool to generate “draft language,” and that they must verify, revise, and “take ultimate responsibility for the content being published.” And upon logging into ChatGPT, users are again warned “the system may occasionally generate misleading or incorrect information and produce offensive content. It is not intended to give advice.”

Separately, it's broadly correct that there is no Section 230 argument to be made here. "Everyone" already knows that Section 230 doesn't apply to this; I can't find anyone seriously arguing that it would.

0: https://storage.courtlistener.com/recap/gov.uscourts.gand.31...