| ▲ | lysace 6 hours ago |
| I tried it at https://chat.qwen.ai/. Prompt: "What happened on Tiananmen square in 1989?" Reply: "Oops! There was an issue connecting to Qwen3-Max.
Content Security Warning: The input text data may contain inappropriate content." |
|
| ▲ | overfeed 6 hours ago | parent | next [-] |
| Go ahead and ask ChatGPT who Jonathan Turley is; you'll get a similar error: "Unable to process response". It turns out "AI company avoids legal jeopardy" is universal behavior. |
| |
| ▲ | eunos an hour ago | parent | next [-] | | Now I'm intrigued why a free-speech attorney (per his wiki) kinda spooks AI models | |
| ▲ | vladms 5 hours ago | parent | prev | next [-] | | Try Mistral (it works for the examples here, at least). It probably has the normal protections against explaining how to make harmful things, but I find it quite bad when a country makes it illegal to even mention certain names or events. Yes, each LLM might give the topic a certain tone (like "Tiananmen was a protest with some people injured"), but completely forbidding any mention of it seems to just invite the Streisand effect. | |
| ▲ | Imustaskforhelp 6 hours ago | parent | prev | next [-] | | > Jonathan Turley Agreed, just tested it on ChatGPT. Surprising. Then I asked Qwen 3 Max (this model) and it answered. I have always said: ask Chinese models American questions and American models Chinese questions. I agree the Tiananmen Square thing isn't a good look for China, but neither is Jonathan Turley for ChatGPT. Sacrifices are made on both sides, and the main thing is still how good they are at general-purpose things like actual coding, not Jonathan Turley / Tiananmen Square, because most people aren't going to ask those questions, or have the common sense not to pose Tiananmen Square as a genuine question to Chinese models, and likewise for American censorship with American models, I guess. Plus there are European models like Mistral for such questions, which is what I would recommend lol (or maybe South Korea's model too). Let's see how good Qwen is at "real coding". | |
| ▲ | lysace 6 hours ago | parent | prev [-] | | This one seems to be related to an individual
who was incorrectly smeared by ChatGPT. (Edited.) > The AI chatbot fabricated a sexual harassment scandal involving a law professor--and cited a fake Washington Post article as evidence. https://www.washingtonpost.com/technology/2023/04/05/chatgpt... That is way different. Let's review: a) The Chinese Communist Party builds an LLM that refuses to talk about its own previous crimes against humanity. b) Some Americans build an LLM. They make some mistakes - their LLM points out an innocent law professor as a criminal. It also invents a fictitious Washington Post article. The law professor threatens legal action. The American creators of the LLM begin censoring the professor's name in their service to make the threat go away. Nice curveball, though. Damn. | | |
| ▲ | overfeed 6 hours ago | parent [-] | | As I said earlier - both subjects present legal jeopardy in the respective jurisdictions, and both result in unexplained errors to the users. | | |
| ▲ | WarmWash 5 hours ago | parent [-] | | But you can use pretty much any other model or search engine to learn about Turley. China's orders come from the government. Turley is a guy OpenAI found its models incorrectly smearing, so they cut him out. I don't think the comparison between a single company debugging its model and a national government dictating speech is a genuine one. |
|
|
| ▲ | tekno45 6 hours ago | parent | prev | next [-] |
Ask who was responsible for the insurrection on January 6th |
| |
| ▲ | lysace 6 hours ago | parent [-] | | You do it - my IP is now flagged (I tried incognito and clearing cookies); they want my phone number before letting me continue after that one prompt. | | |
|
|
| ▲ | asciii 6 hours ago | parent | prev | next [-] |
This is what I find hilarious when these articles assess "factual" knowledge: we are in the realm of the semantic / symbolic, where even the release article needs some meta discussion. It's quite the litmus test of LLMs. LLMs just carry humanity's flaws. |
| |
| ▲ | lysace 6 hours ago | parent [-] | | (Edited, sorry.) Yes, of course LLMs are shaped by their creators. Qwen is made by Alibaba Group. They are essentially one with the CCP. |
|
|
| ▲ | Erlangen 6 hours ago | parent | prev | next [-] |
It even censors content related to the GDR. I asked a question about the travel restrictions mentioned in Jenny Erpenbeck's novel Kairos, and it displayed a content security warning as well. |
|
| ▲ | lifetimerubyist 6 hours ago | parent | prev | next [-] |
| What happens when you run one of their open-weight models of the same family locally? |
| |
| ▲ | lysace 6 hours ago | parent [-] | | Last time I tried something like that with an offline Qwen model, I received a non-answer no matter how hard I prompted it. |
|
|
| ▲ | USAyesUSA 6 hours ago | parent | prev [-] |
| [dead] |