throw10920 | 4 days ago
Genuine answer: the model has been trained by companies that are required by law to censor it to conform to PRC/CCP party lines, including rejection of consensus reality such as Tiananmen Square[1]. Yes, the censorship on some topics currently doesn't appear to be any good, but it does exist, will absolutely get better (both harder to subvert and more subtle), and makes the models less trustworthy than those from countries (US, EU, Sweden, whatever) that don't have that same level of state control. (Note that I'm not claiming there's no state control elsewhere, or picking on any specific other country.) That's the downside to the user.

To loop back to your question, the upside to China is soft power (the same kind that the US has been flushing away recently). It's pretty similar to TikTok: if you have an extremely popular thing that people spend hours a day on and start to filter their lives through, and you can influence it, that's a huge amount of power, even if you don't make any money off of it.

Now, to be fair to the context of your question, there isn't nearly as much soft power to be gained from a model that people use primarily for coding; that I'm less concerned about.

[1] https://www.tomsguide.com/ai/i-just-outsmarted-deepseeks-cen...
criley2 | 4 days ago | parent
As a counterpoint: using a foreign model means the for-domestic-consumption censorship will not affect you much. Qwen is happy to talk about MAGA, slavery, the Holocaust, or any other "controversial" Western topic.

However, American models (just like Chinese models) are heavily censored according to the norms of their own society. ChatGPT, Claude, and Gemini are all aggressively censored to meet Western expectations. So in essence, Chinese models should be less censored than Western models on Western topics.