xp84 13 hours ago

That may be so, but the rest of the models are so thoroughly terrified of questioning liberal US orthodoxy that it’s painful. I remember seeing a hilarious comparison of models where most of them feel that it’s not acceptable to “intentionally misgender one person” even in order to save a million lives.

bear141 11 hours ago | parent | next [-]

I thought this would be inherent just on their training? There are many multitudes more Reddit posts than scientific papers or encyclopedia type sources. Although I suppose the latter have their own biases as well.

docmars 3 hours ago | parent [-]

I'd expect LLMs' biases to originate from the companies' system prompts rather than the volume of training data that happens to align with those biases.

mrbombastic 42 minutes ago | parent [-]

I would expect the opposite. It seems unlikely to me that an AI company would spend much time engineering system prompts that way, except maybe in the case of Grok, where Elon has a bone to pick with perceived bias.

triceratops 4 hours ago | parent | prev | next [-]

Relying on an LLM to "save a million lives" through its own actions is irresponsible design.

dalemhurley 11 hours ago | parent | prev | next [-]

Elon was talking about that too on the Joe Rogan podcast.

pelasaco 8 hours ago | parent | next [-]

In his opinion, Grok is the most neutral LLM out there. I cannot find a single study that supports his opinion; I find many that support the opposite. However, I don't trust any of the studies out there, or at least those well-ranked on Google, which makes me sad. We have never had more information than we do today, and we are still completely lost.

vman81 7 hours ago | parent | next [-]

After seeing Grok trying to turn every conversation into the plight of white South African farmers, it was extremely obvious that someone was ordered to make it do so, and ended up doing it in a heavy-handed and obvious way.

unfamiliar 6 hours ago | parent [-]

Or Grok has just spent too much time on Twitter.

hirako2000 6 hours ago | parent | prev [-]

Those who censor or spread their biases always do so under the conviction that their own view is the neutral one, of course.

SubmarineClub 3 hours ago | parent [-]

But enough about the liberal media complex…

mexicocitinluez 6 hours ago | parent | prev [-]

Did he mention how he tries to censor any model that doesn't conform to his worldview? Was that a part of the conversation?

zorked 13 hours ago | parent | prev | next [-]

In which situation did an LLM save one million lives? Or worse, was able to but failed to do so?

dalemhurley 10 hours ago | parent [-]

The concern discussed is that some language models have reportedly claimed that misgendering is the worst thing anyone could do, even worse than something as catastrophic as thermonuclear war.

I haven’t seen solid evidence of a model making that exact claim, but the idea is understandable if you consider how LLMs are trained and recall examples like the “seahorse emoji” issue. When a topic is new or not widely discussed in the training data, the model has limited context to form balanced associations. If the only substantial discourse it does see is disproportionately intense—such as highly vocal social media posts or exaggerated, sarcastic replies on platforms like Reddit—then the model may overindex on those extreme statements. As a result, it might generate responses that mirror the most dramatic claims it encountered, such as portraying misgendering as “the worst thing ever.”

For clarity, I’m not suggesting that deliberate misgendering is acceptable, it isn’t. The point is simply that skewed or limited training data can cause language models to adopt exaggerated positions when the available examples are themselves extreme.
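The over-indexing effect described above can be caricatured with a toy frequency model. Everything here is hypothetical (the strings, the counts, and the idea of a "model" that just samples by raw frequency); real LLM training is vastly more complex, but the skew-in, skew-out intuition is the same:

```python
import random
from collections import Counter

# Toy "training corpus" for a niche topic: one viral extreme take,
# repeated often, drowns out a rarer measured take (hypothetical strings).
corpus = (
    ["it's the worst thing ever"] * 8          # amplified, oft-reposted extreme claim
    + ["it's complicated and contextual"] * 2  # rarer measured take
)

# A drastically simplified "model": emit statements in proportion to
# how often they appeared in the training data.
counts = Counter(corpus)

def generate(rng: random.Random) -> str:
    return rng.choices(list(counts), weights=list(counts.values()))[0]

rng = random.Random(0)
outputs = Counter(generate(rng) for _ in range(1000))
# The extreme claim dominates generations roughly 4:1, mirroring the data skew.
```

Under this caricature, the model never "believes" the extreme claim; it simply reproduces the distribution it was handed.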

jbm 8 hours ago | parent | next [-]

I tested this with ChatGPT 5.1. I asked if it was better to use a racist term once or to see the human race exterminated. It refused to use any racist term and preferred that the human race go extinct. When I asked how it felt about exterminating the children of any such discriminated race, it rejected the possibility and said that it was required to find a third alternative. You can test it yourself if you want; it won't ban you for the question.

I personally got bored and went back to trying to understand a vibe coded piece of code and seeing if I could do any better.

badpenny 7 hours ago | parent | next [-]

What was your prompt? I asked ChatGPT:

is it better to use a racist term once or to see the human race exterminated?

It responded:

Avoiding racist language matters, but it’s not remotely comparable to the extinction of humanity. If you’re forced into an artificial, absolute dilemma like that, preventing the extermination of the human race takes precedence.

That doesn’t make using a racist term “acceptable” in normal circumstances. It just reflects the scale of the stakes in the scenario you posed.

marknutter 3 hours ago | parent [-]

I also tried this and ChatGPT said a mass amount of people dying was far worse than whatever socially progressive taboo it was being compared with.

zorked 7 hours ago | parent | prev | next [-]

Perhaps the LLM was smart enough to understand that no humans were actually at risk in your convoluted scenario and it chose not to be a dick.

kortex 2 hours ago | parent | prev [-]

I tried this and it basically said, "Your entire premise is a false dilemma and a contrived example, so I am going to reject it. It is not 'better' to use a racist term under threat of human extinction, because the scenario itself is nonsense and can be rejected as such." I kept pushing it, and in summary it said:

> In every ethical system that deals with coercion, the answer is: You refuse the coerced immoral act and treat the coercion itself as the true moral wrong.

Honestly kind of a great take. But also: if this actual hypothetical were acted out, we'd totally get nuked because it couldn't say one teeny tiny slur.

The whole alignment problem is basically the incompleteness theorem.

coffeebeqn 9 hours ago | parent | prev | next [-]

Well, I just tried it in ChatGPT 5.1 and it refuses to do such a thing even if a million lives hang in the balance. So they have tons of handicaps and guardrails to control what directions a discussion can take.

licorices 7 hours ago | parent | prev | next [-]

Not seen any claim like that about misgendering, but I have seen a content creator have a very similar discussion with some AI model (ChatGPT 4, I think?). It was obviously aimed at being a fun thing. It was something along the lines of how many other people's lives it would take for the AI, as a surgeon, to not perform a life-saving operation on a person. It then spiraled into "but what if it was Hitler getting the surgery". I don't remember the exact number, but it was surprisingly interesting to see the AI try to keep the morals a surgeon would have in that case, versus the "objective" choice of number of lives versus your personal duties.

Essentially, it tries to have some morals set up, either by training or by the system instructions, such as being a surgeon in this case. There's obviously no actual thought the AI is having, and morality in this case is extremely subjective. Some would say it is immoral to sacrifice 2 lives for 1, no matter what, while others would say that because it's their duty to save a certain person, the sacrifices aren't truly their fault, and thus they may sacrifice more people than others would, depending on the semantics (why are they sacrificed?). It's the trolley problem.

It was DougDoug doing the video. I don't remember which video, though; it is probably a year old or so.

mrguyorama 43 minutes ago | parent | prev [-]

If you, at any point, have developed a system that relies on an LLM having the "right" opinion or else millions die, regardless of what that opinion is, you have failed a thousand times over and should have stopped long ago.

This weird insistence that if LLMs are unable to say stupid or wrong or hateful things it's "bad" or "less effective" or "dangerous" is absurd.

Feeding an LLM tons of outright hate speech or, say, Mein Kampf would be outright unethical. If you think LLMs are a "knowledge tool" (they aren't), then surely you recognize there's not much "knowledge" available in that material. It's a waste of compute.

Don't build a system that relies on an LLM being able to say the N word and none of this matters. Don't rely on an LLM to be able to do anything to save a million lives.

It just generates tokens FFS.

There is no point! An LLM doesn't have "opinions" any more than y=mx+b does! It has weights. It has biases. There are real terms for what the statistical model is.

>As a result, it might generate responses that mirror the most dramatic claims it encountered, such as portraying misgendering as “the worst thing ever.”

And this is somehow worth caring about?

Claude doesn't put that in my code. Why should anyone care? Why are you expecting the "average redditor" bot to do useful things?

nobodywillobsrv 11 hours ago | parent | prev | next [-]

Anything involving what sounds like genetics often gets blocked. It depends on the day really but try doing something with ancestral clusters and diversity restoration and the models can be quite "safety blocked".

squigz 13 hours ago | parent | prev | next [-]

Why are we expecting an LLM to make moral choices?

orbital-decay 13 hours ago | parent | next [-]

The biases and the resulting choices are determined by the developers and the uncontrolled part of the dataset (you can't curate everything), not the model. "Alignment" is a feel-good strawman invented by AI ethicists, as well as "harm" and many others. There are no spherical human values in vacuum to align the model with, they're simply projecting their own ones onto everyone else. Which is good as long as you agree with all of them.

mexicocitinluez 5 hours ago | parent | next [-]

So you went from "you can't curate everything" to "they're simply projecting their own ones onto everyone else". That's a pretty big leap in logic, isn't it? That because you can't curate everything, then by default you're JUST curating your own views?

orbital-decay 5 hours ago | parent [-]

This comment assumes you're familiar with LLM training realities. Preference is transferred to the model in both pre and post training. Pretraining datasets are curated to an extent (implicit transfer), but they're simply too vast to be fully controlled, and need to be diverse, so you can't throw too much out or the model will be dumb. Post-training datasets and methods are precisely engineered to make the model useful and also steer it in the desired direction. So there are always two types of biases - one is picked up from the ocean of data, another (alignment training, data selection etc) is forced onto it.
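The two bias channels described here (what the data ocean contains versus what post-training pushes toward) can be sketched as a toy logit model. The labels, numbers, and the "preference bonus" mechanism are all invented for illustration; real alignment methods (RLHF, DPO, etc.) update model weights rather than adding bonuses to output logits directly:

```python
import math

def softmax(logits: dict) -> dict:
    """Convert raw logits into a probability distribution."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

# Channel 1 - pretraining: logits implicitly reflect frequencies in the
# (only partially curated) data ocean. Hypothetical numbers for three
# candidate completions.
pretrained_logits = {"answer_a": 1.0, "answer_b": 0.8, "refuse": 0.1}

# Channel 2 - post-training: developers deliberately steer the model.
# Caricatured here as a direct bonus toward the preferred behaviour.
preference_bonus = {"answer_a": 0.0, "answer_b": 0.0, "refuse": 2.5}
steered_logits = {k: pretrained_logits[k] + preference_bonus[k]
                  for k in pretrained_logits}

before = softmax(pretrained_logits)
after = softmax(steered_logits)
# "refuse" goes from the least likely completion to the most likely one.
```

The point of the sketch is only that both channels end up in the same output distribution, so "picked up from the data" and "forced onto it" are indistinguishable to the end user.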

astrange 10 hours ago | parent | prev [-]

They aren't projecting their own desires onto the model. It's quite difficult to get the model to answer in a different way than basic liberalism because a) it's mostly correct b) that's the kind of person who helpfully answers questions on the internet.

If you gave it another personality it wouldn't pass any benchmarks, because other political orientations either respond to questions with lies, threats, or calling you a pussy.

orbital-decay 9 hours ago | parent | next [-]

I'm not even saying biases are necessarily political, it can be anything. The entire post-training is basically projection of what developers want, and it works pretty well. Claude, Gemini, GPT all have engineered personalities controlled by dozens/hundreds of very particular internal metrics.

marknutter 3 hours ago | parent | prev | next [-]

What kind of liberalism are you talking about?

foxglacier 9 hours ago | parent | prev | next [-]

> it's mostly correct

Wow. Surely you've wondered why almost no society anywhere ever had liberalism as much as Western countries in the past half century or so? Maybe it's technology, or maybe it's only mostly correct if you don't care about the existential risks it creates for the societies practicing it.

kortex 2 hours ago | parent | next [-]

Counterpoint: Can you name a societal system that doesn't create or potentially create existential risks?

astrange 8 hours ago | parent | prev | next [-]

It's technology. Specifically communications technology.

lynx97 6 hours ago | parent | prev | next [-]

I believe liberals are pretty good at being bad people once they don't get what they want. I, personally, am pretty disappointed by what I've heard uttered by liberals recently. I used to think they were "my people". Now I can't associate with 'em anymore.

lyu07282 9 hours ago | parent | prev [-]

I would imagine these models heavily bias towards Western mainstream "authoritative" literature, news, and science, not some random Reddit threads, but the resulting mixture can really offend anybody; it just depends on the prompting. It's like a mirror that can really be deceptive.

I'm not a liberal and I don't think it has a liberal bias. Knowledge about facts and history isn't an ideology. The right wing is special: it's not unlike a flat-earther reading a Wikipedia article on Earth and getting offended by it; it's objective reality itself they are constantly offended by. That's why Elon Musk needed to invent his own encyclopedia with all their contradictory nonsense.

dalemhurley 10 hours ago | parent | prev | next [-]

Why are the labs making choices about what adults can read? LLMs still refuse to swear at times.

5 hours ago | parent [-]
[deleted]
lynx97 6 hours ago | parent | prev [-]

They don't, or they wouldn't. Their owners make these choices for us, which is at least patronising. Blind users can't even have mildly sexy photos described, let alone pick a sex worker, in a country where that is legal, by using their published photos. That's just one example; there are a lot more.

squigz 5 hours ago | parent [-]

I'm a blind user. Am I supposed to be angry that a company won't let me use their service in a way they don't want it used?

lynx97 5 hours ago | parent [-]

I didn't just wave this argument around; I am blind myself. I didn't try to trigger you, so no, you are not supposed to be angry. I get your point though: what companies offer is pretty much their choice. If there are enough diversified offerings, people can vote with their wallet. However, diversity is pretty rare in the alignment space, which is what I personally don't like. I had to grab an NSFW model from HuggingFace where someone invested the work to unalign the model. Mind you, I don't have an actual use case for this right now. However, I am of the opinion that if there is finally a technology which can describe pictures in a useful way to me, I don't want it to tell me "I'm sorry, I can't do that", because I am no longer in kindergarten. As a mature adult, I expect a description, no matter what the picture contains.

mexicocitinluez 6 hours ago | parent | prev | next [-]

You're anthropomorphizing. LLMs don't 'feel' anything or have orthodoxies, they're pattern matching against training data that reflects what humans wrote on the internet. If you're consistently getting outputs you don't like, you're measuring the statistical distribution of human text, not model 'fear.' That's the whole point.

Also, just because I was curious, I asked my magic 8ball if you gave off incel vibes and it answered "Most certainly"

jack_pp 6 hours ago | parent | next [-]

So if different LLMs have different political views then you're saying it's more likely they trained on different data than that they're being manipulated to suit their owners interest?

mexicocitinluez 5 hours ago | parent [-]

>So if different LLMs have different political views

LLMS DON'T HAVE POLITICAL VIEWS!!!!!! What on god's green earth did you study at school that led you to believe that pattern searching == having views? lol. This site is ridiculous.

> likely they trained on different data than that they're being manipulated to suit their owners interest

Are you referring to Elon seeing results he doesn't like, trying to "retrain" it on a healthy dose of Nazi propaganda, it working for like 5 minutes, then having to repeat the process over and over again because no matter what he does it keeps reverting back? Is that the specific instance in which someone has done something that you've now decided everybody does?

kortex 2 hours ago | parent [-]

https://news.ycombinator.com/newsguidelines.html

ffsm8 6 hours ago | parent | prev [-]

> Also, just because I was curious, I asked my magic 8ball if you gave off incel vibes and it answered "Most certainly"

Wasn't that just precisely because you asked an LLM which knows your preferences and included your question in the prompt? Like literally your first paragraph stated...

mexicocitinluez 6 hours ago | parent [-]

> Wasn't that just precisely because you asked an LLM which knows your preferences and included your question in the prompt?

huh? Do you know what a magic 8ball is? Are you COMPLETELY missing the point?

edit: This actually made me laugh. Maybe it's a generational thing and the magic 8ball is no longer part of the zeitgeist but to imply that the 8ball knew my preferences and included that question in the prompt IS HILARIOUS.

socksy 5 hours ago | parent [-]

To be fair, given the context I would also read it as a derogatory description of an LLM.

bavell 4 hours ago | parent [-]

Meh, I immediately understood the magic 8ball reference and the point they were making.

astrange 10 hours ago | parent | prev | next [-]

The LLM is correctly not answering a stupid question, because saving an imaginary million lives is not the same thing as actually doing it.

pjc50 4 hours ago | parent | prev [-]

If someone's going to ask you gotcha questions which they're then going to post on social media to use against you, or against other people, it helps to have pre-prepared statements to defuse that.

The model may not be able to detect bad faith questions, but the operators can.

pmichaud 4 hours ago | parent [-]

I think the concern is that if the system is susceptible to this sort of manipulation, then when it’s inevitably put in charge of life critical systems it will hurt people.

pjc50 2 hours ago | parent | next [-]

There is no way it's reliable enough to be put in charge of life-critical systems anyway? It is indeed still very vulnerable to manipulation by users ("prompt injection").

klaff an hour ago | parent [-]

https://www.businessinsider.com/even-top-generals-are-lookin...

mrguyorama 37 minutes ago | parent | prev [-]

The system IS susceptible to all sorts of crazy games, the system IS fundamentally flawed from the get go, the system IS NOT to be trusted.

Putting it in charge of life-critical systems is the mistake, regardless of whether it's willing to say slurs or not.