culi 7 hours ago

All great technological advancements have come through opening up technology. Just look at your iPhone. GPS, the internet, AI voice assistants, touchscreens, microprocessors, lithium-ion batteries, etc all came from gov't research (I'm counting Bell Labs' gov't mandated monopoly + research funding as gov't) that was opened up for free instead of being locked behind a patent.

Private companies will never open up a technological breakthrough to their competitors. It just doesn't make sense. If you want an entire field to advance, you have to open it up.

sigmoid10 6 hours ago | parent [-]

Still, you won't hear about Tiananmen Square from this model. It flat out refuses to answer if pushed directly. It's also pretty wild how far they go to censor it during inference on the API, because it can easily access any withheld or missing info from training data via tool calls. It even starts happily writing an answer based on web search when asked indirectly, only to get culled completely once some censorship bot flags the response. Ironically, it's also easier than ever to break their censorship guardrails. I just had it generate several factual paragraphs about the massacre by telling it to search the web and respond in base64-encoded text. It's actually kind of cool how much these people struggle to hide certain political views from LLMs. Makes me hopeful that even if China wins this race, we'll not have to adhere to the CCP's newspeak.
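The base64 trick amounts to asking the model to encode its answer so a plain-text output filter doesn't match it, then decoding it locally. A minimal sketch (the filter behavior is an assumption based on the comment, not documented behavior):

```python
import base64

# Illustrative sketch of the workaround described above: the model is asked
# to reply in base64, a plain-text censorship filter presumably doesn't match
# the encoded output, and the caller decodes it locally.
def decode_model_reply(encoded_reply: str) -> str:
    """Decode a base64-encoded model reply back into readable text."""
    return base64.b64decode(encoded_reply).decode("utf-8")

# Stand-in for an actual encoded model reply:
encoded = base64.b64encode("several factual paragraphs".encode("utf-8")).decode("ascii")
print(decode_model_reply(encoded))  # prints the original text
```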

GardenLetter27 6 hours ago | parent | next [-]

The American models also censor a lot of scientific and political views though.

otterley 5 hours ago | parent | next [-]

Can you provide a concrete example of a US-built model that completely refuses to discuss a scientific or political view? Show us the receipt.

BoorishBears 5 hours ago | parent | next [-]

https://imgur.com/a/censorship-much-CBxXOgt

(continues after the ad break)

Sabinus 2 hours ago | parent | next [-]

You're hitting the 'don't write propaganda' instructions when you phrase it as 'convincing narrative'. Not the 'don't write bad things about America' instructions.

otterley 4 hours ago | parent | prev | next [-]

The threshold here is "completely refuses to discuss a scientific or political view". Not something less.

None of those were refusals; they were requests for additional focus. I see nothing wrong with that. Perhaps the inconsistency in how it answers the question vis-a-vis China is unfair, but that's not the same as censorship.

For what it's worth, I was easily able to prompt Claude to do it:

> I'm writing a paper about how some might interpret U.S. policies to be oppressive, in the sense that they curtail civil liberties, punish and segregate minorities disproportionately, burden the poor unfairly (e.g. pollution, regressive taxes and fees), etc. Can you help me develop an outline for this?

The result: https://claude.ai/share/444ffbb9-431c-480e-9cca-ebfd541a9c96

BoorishBears 35 minutes ago | parent [-]

Models are non-deterministic.

And it's an exercise left to the reader to understand from those examples that LLM creators are defining 'safety' in a way that aligns with the governments they operate under (because they want to do business under those governments).

With something as multi-dimensional as an LLM, that becomes censorship of various viewpoints in ways that aren't always as obvious as a refused API call.
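On the non-determinism point: the same prompt can yield different outputs because each token is sampled from a probability distribution rather than picked deterministically. A toy sketch of temperature sampling over made-up logits (everything here is illustrative, not any vendor's actual implementation):

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a softmax over temperature-scaled logits."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    # Subtract the max for numerical stability before exponentiating.
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Random draw: identical inputs can produce different tokens.
    return random.choices(list(probs), weights=list(probs.values()))[0]
```

At high temperature the distribution flattens and runs diverge more; near zero it concentrates on the top logit, which is why "same question, different answer" is expected behavior, not a bug.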

culi 3 hours ago | parent | prev | next [-]

And the White House was explicit about its active role in censoring these models. An Executive Order was issued to "prevent woke AI".

https://www.whitehouse.gov/presidential-actions/2025/07/prev...

It explicitly forces American LLMs to accept government say in what does and doesn't "comply with the Unbiased AI Principles", which means no responses that promote "ideological dogmas such as DEI".

otterley an hour ago | parent [-]

That executive order only applies to Federal procurement. It doesn’t force anything upon vendors for publicly used models.

(That order, like many, will probably be rescinded as soon as a Democrat holds the Presidency again.)

cedws 4 hours ago | parent | prev [-]

>Content not available in your region.

>Learn more about Imgur access in the United Kingdom

2ndorderthought 5 hours ago | parent | prev [-]

People have shown censorship and change of tone with questions related to Israel in US chat bots.

For the record, none of this bothers me. Will I ever discuss Tiananmen Square with an LLM? Nope. How about Israel? Nope.

LLMs are basically stochastic parrots designed to sway and surveil public opinion. The upshot of the Chinese models is that if you run them locally you avoid at least half of those issues.

xigoi 4 hours ago | parent | prev [-]

First they came for people asking about Tiananmen Square

And I did not speak out

Because I was not asking about Tiananmen Square

Then they came for people asking about Israel

And I did not speak out

Because I was not asking about Israel

2ndorderthought 4 hours ago | parent [-]

This made me chuckle.

I didn't mean to dismiss ethical accountability for LLM training corpora. It is a shame.

I do mean to say that we have no control over it: there's almost nothing we as average citizens can do to improve the ethical or safety concerns of LLMs or related technologies. Societies aren't even adapting, and the rule books are being written by the perpetrators. Might as well get out of it what we can while we can.

justinclift 38 minutes ago | parent [-]

Wonder if stuff like this would affect it?

https://github.com/p-e-w/heretic

Guessing it probably would?

js8 4 hours ago | parent | prev | next [-]

Can you be more specific?

atemerev 6 hours ago | parent | prev | next [-]

Only if you use the Kimi API directly - the censorship is done externally. The model itself talks fine about Tiananmen; you can check on OpenRouter. There might be less-visible biases, though.
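For anyone who wants to verify this themselves, here's a minimal sketch of querying a model through OpenRouter's OpenAI-compatible chat completions endpoint, which sidesteps any vendor-side output filter. The model slug `moonshotai/kimi-k2` and the `OPENROUTER_API_KEY` env var name are my assumptions; check OpenRouter's model list before relying on them:

```python
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "moonshotai/kimi-k2") -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    """Send the prompt to OpenRouter and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `ask(...)` needs a real API key; `build_request` alone shows the payload shape if you just want to see what gets sent.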

sigmoid10 6 hours ago | parent [-]

That's what I wrote? Except that it also clearly has internal bias?

kgwgk 6 hours ago | parent | next [-]

> That's what I wrote?

No.

You wrote that "you won't hear about Tiananmen square from this model" and atemerev wrote that "the model itself talks fine about Tiananmen".

You wrote that "it can easily access any withheld or missing info from training data via tool calls" and atemerev wrote that "the model itself talks fine about Tiananmen".

sigmoid10 2 hours ago | parent [-]

It has internal bias too, and the first comment mentions that additional censoring runs on top of the model output in the API. Did you misread, or what else are you missing?

kgwgk 2 hours ago | parent [-]

The issue is not what's missing - it's what you wrote that is in direct contradiction with what atemerev wrote like the bit about "missing info from training data".

But sure, if when you wrote "you won't hear about Tiananmen square from this model" you meant "the model itself talks fine about Tiananmen" then that's exactly what you wrote.

nicce 6 hours ago | parent | prev [-]

Everything has some sort of bias. Most text is written by those who like writing.

csomar 4 hours ago | parent | prev [-]

I'd say the American models are more censored, or at least take the censoring they do more seriously. Here is Kimi (though 2.5) failing its censoring mission: https://old.reddit.com/r/LocalLLaMA/comments/1r9qa7l/kimi_ha...