gs17 3 hours ago

> If this were true, why didn’t the chatbot immediately recognize that the word “Taiwan” should trigger the response?

Not recognizing that it is outputting wrongthink until the text has already been streamed to the user is a known behavior of some Chinese chatbot apps. A quick search found an example of DeepSeek doing it: https://www.reddit.com/r/OpenAI/comments/1ic3kl6/deepseek_ce...

I don't think his story is genuine, but showing the "wrong" answer before correcting itself is known behavior.

EDIT: Here's an example of it outputting a full response about Taiwan specifically before removing it: https://www.reddit.com/r/interestingasfuck/comments/1i7ceol/...

recursive 2 hours ago | parent

I've seen this from the non-Chinese ChatGPT too. Something apparently triggered its sensitivity filters and it refused to answer, but only after part of the real answer had already streamed to the output, then been redacted and replaced.
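The behavior described in this thread is consistent with a moderation check running alongside token streaming: earlier tokens reach the user's screen before the check on the accumulated text fires, at which point the client is told to retract what was shown. A minimal sketch of that pattern (hypothetical; the blocklist, event names, and replacement message are illustrative, not any vendor's actual implementation):

```python
# Hypothetical sketch of stream-then-retract moderation.
# Not DeepSeek's or OpenAI's actual implementation.

BLOCKLIST = {"taiwan"}

def stream_with_moderation(tokens):
    """Yield ("token", text) events as tokens arrive; if the accumulated
    text ever matches the blocklist, yield a ("retract", replacement)
    event, which a client would use to erase and replace the text it has
    already displayed."""
    shown = []
    for tok in tokens:
        shown.append(tok)
        # The check runs on accumulated output, so earlier tokens have
        # already been displayed by the time it triggers.
        if any(term in "".join(shown).lower() for term in BLOCKLIST):
            yield ("retract", "Sorry, that's beyond my current scope.")
            return
        yield ("token", tok)

events = list(stream_with_moderation(["The ", "capital ", "of ", "Taiwan", " is..."]))
```

Because the filter only sees text after it is generated, a user watching the stream briefly sees the real answer before the retract event replaces it, which matches the Reddit clips linked above.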