lysace 6 hours ago

This one seems to be related to an individual who was incorrectly smeared by chatgpt. (Edited.)

> The AI chatbot fabricated a sexual harassment scandal involving a law professor--and cited a fake Washington Post article as evidence.

https://www.washingtonpost.com/technology/2023/04/05/chatgpt...

That is way different. Let's review:

a) The Chinese Communist Party builds an LLM that refuses to talk about their previous crimes against humanity.

b) Some Americans build an LLM. They make some mistakes: their LLM points out an innocent law professor as a criminal. It also invents a fictitious Washington Post article.

The law professor threatens legal action. The American creators of the LLM begin censoring the professor's name in their service to make the threat go away.

Nice curveball though. Damn.

overfeed 6 hours ago | parent [-]

As I said earlier - both subjects present legal jeopardy in their respective jurisdictions, and both result in unexplained errors for users.

WarmWash 5 hours ago | parent [-]

But you can use pretty much any other model or search engine to learn about Turley.

China's orders come from the government. Turley is a guy whose name OpenAI found its models incorrectly smearing, so they cut him out.

I don't think the comparison between a single company debugging its model and a national government dictating speech is a genuine one.