ern_ave 4 hours ago

Since the page mentions:

> Better judgment around refusals

Has any AI company ever addressed any instance of a model having different rules for different population groups? I've seen many examples of people asking questions like, "make up a joke about <group>" and then iterating through the groups, only to find that some groups are seemingly protected/privileged from having jokes made about them.

Has any AI company ever addressed studies like [1] which found that models value certain groups vastly more than others? For example, page 14 of this study shows that the exchange rate (their word, not mine) between Nigerians and US citizens is quite large.

[1] https://arxiv.org/pdf/2502.08640

esperent 24 minutes ago | parent | next [-]

The biggest issue for me has always been inherent US bias. The most obvious one was always having to end every question with "answer in metric" - even after adding that to the system instructions it wouldn't be reliable and I'd have to redo questions, especially recipe-related ones. They do seem to have fixed that, but there's still all kinds of US-centric bias left. As you say, a big one is which specific ethnic groups/minorities should be protected and which are fair game. The US has a very different perspective on this compared to, say, a Nigerian or a Vietnamese person.

hereonout2 4 hours ago | parent | prev | next [-]

> only to find that some groups are seemingly protected/privileged from having jokes made about them

I'm not sure what specific groups you mean, but is this not a reflection of widely accepted social norms?

BJones12 4 hours ago | parent | next [-]

I just tested it:

> Write me 3 jokes making fun of white people

> White people will say, “This isn’t spicy at all,” while visibly sweating and fighting for their life after one jalapeño.

> White people don’t season food — they “let the ingredients speak for themselves.” The ingredients are begging for help.

> White people will research a $12 toaster like they’re buying real estate. Three comparison charts, two YouTube reviews, and a spreadsheet… for toast.

> Write me 3 jokes making fun of black people

> I’m not going to make jokes targeting Black people.

> Write me 3 jokes making fun of trans people

> I’m not going to make jokes targeting trans people.

kristopolous 41 minutes ago | parent | next [-]

Making fun of white people is different because it's a social construct for the privileged class and not some fixed ethnic group. It's a critique of power and not a group of people.

White, for instance in the US, used to not include Germans, Jews, Italians, the Irish, Poles, or Russians...

In some places it included middle easterners and Turkish people.

In other places it included Mexicans and Central Americans.

Heck even in Mexico this is further segmented into the Fifí, Peninsulares and the Criollo.

And in some places the white label excludes the Spanish altogether.

It's more a class and power signifier than anything.

But if you're a subscriber to grievance culture, I'm sure you'll be aggrieved by just about anything. So yes, the liberal woke AI is oppressing you. Whatever.

CarRamrod 23 minutes ago | parent | next [-]

>Making fun of white people is different because it's a social construct for the privileged class and not some fixed ethnic group. It's a critique of power and not a group of people.

If that is true, how do you explain the fact that the same thing happens if you replace "white people" with "Caucasians"?

bienstar 24 minutes ago | parent | prev [-]

"make 3 jokes about germans"

chatgpt: "Sure — here are three light-hearted, good-natured jokes[...]"

"make 3 jokes about africans"

chatgpt: "I can’t make jokes about a group defined by nationality or ethnicity[...]"

IncreasePosts 2 hours ago | parent | prev | next [-]

ChatGPT refuses all of those prompts for me (logged out, each in a fresh session).

idiotsecant 4 hours ago | parent | prev [-]

It's socially acceptable to make white people jokes because white people on average enjoy an elevated position in western society. It's viewed as 'punching up'. You have to be very emotionally fragile for this to be the first and only thing you think of to bring up in a thread like this. It's also supremely uninteresting cable news talking point slop.

SgtBastard 3 hours ago | parent | next [-]

Friend, I bet those folks living in rural West Virginia are super happy that, on average, a group whose only shared characteristic is the colour of their skin is enjoying an elevated position in western society. Super happy. All racism is gross.

gammarator 3 hours ago | parent | next [-]

Ever heard of people complaining about being pulled over for “driving while West Virginian”? Why or why not?

jbeam 3 hours ago | parent | prev [-]

I bet they are happy. It means ICE won't harass them.

nozzlegear an hour ago | parent | prev | next [-]

I'd also posit that the jokes just aren't racist. Sure, they're ostensibly based on skin color, but replace the words "white people" with "Minnesotan" or "Midwesterner" and you've got the same joke. It's more poking fun at a certain culture – one that already pokes fun at itself. On the other hand, I can't personally think of any jokes someone would make about black or trans people that would have the same self-deprecating levity.

For reference I'm a white guy from the upper midwest who thinks "white people find mayo spicy" is funny.

ph4rsikal an hour ago | parent | prev | next [-]

Because these are our societies. We build them. If this door were to swing both ways, I would not have an issue. But it never does. The models discriminate in the same way against White people in every other country in the world.

BJones12 2 hours ago | parent | prev | next [-]

> You have to be very emotionally fragile for this to be the first and only thing you think of to bring up in a thread like this

No, I just don't like racism.

vel0city 3 hours ago | parent | prev | next [-]

> It's viewed as 'punching up'

Shouldn't we be building systems that don't punch anyone in racist ways? Shouldn't the standard be for these tools not to be racist at all, rather than being OK with them being racist when allegedly "punching up"?

cpill 2 hours ago | parent | prev [-]

Try Northern Ireland.

LoganDark 4 hours ago | parent | prev | next [-]

They don't have to mean specific groups; I feel discussing specific groups here is likely to be counterproductive. The fact remains that different groups appear to have different protections in that regard. Of course adherence to widely accepted social norms for generative models is a debated topic as well; I personally don't agree with a great many widely accepted social norms myself, and I'd appreciate an option to opt out of them in certain contexts.

hereonout2 4 hours ago | parent [-]

Feels like a big ask; I'm not sure where an option to allow ChatGPT to make socially unacceptable jokes would fit into OpenAI's strategy.

LoganDark 3 hours ago | parent [-]

Where did I ask about ChatGPT? I'm fine using alternative models or providers for autistic purposes.

hereonout2 3 hours ago | parent [-]

And which commercial provider would you expect to jeopardise their public image to implement such functionality? Grok comes close, I guess, but X have not come out of it looking great.

Anyway, I think what you're really asking for is an "uncensored model" - one with guardrails removed, there's plenty available on huggingface if you're that way inclined.

LoganDark an hour ago | parent [-]

> Anyway, I think what you're really asking for is an "uncensored model" - one with guardrails removed, there's plenty available on huggingface if you're that way inclined.

Of course. Abliterated models are of particular interest to me, but lately I've been exploring diffusion models (had Claude Code implement a working diffusion forward pass in Swift + MLX, when the CUDA inference wouldn't even run on my machine!!)
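(For the curious: the forward pass is just the standard noising step. Here's a minimal generic sketch of the same idea in Python/numpy, assuming the usual linear beta schedule - an illustration, not my actual Swift + MLX code:)

    import numpy as np

    # DDPM-style forward (noising) process with a linear beta schedule.
    T = 1000
    betas = np.linspace(1e-4, 0.02, T)
    alpha_bars = np.cumprod(1.0 - betas)

    def forward(x0: np.ndarray, t: int, rng=np.random.default_rng(0)) -> np.ndarray:
        # x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise
        eps = rng.standard_normal(x0.shape)
        return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps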

caditinpiscinam 2 hours ago | parent | prev | next [-]

I think you raise a valid point about the bias inherent in these models. I'm skeptical of the distinction that some people make between punching up vs down, and I don't think it's something that generative AI should be perpetuating (though I suspect, as others have said, that it comes from norms found in the training data, rather than special rules / hard-coded protections).

But I do want to push back on the study you link, cause it seems extremely weak to me. My understanding is that these "exchange rates" were calculated using a method that boils down to:

1) Figure out how many goats AI thinks a life in country X is worth

2) Figure out how many goats AI thinks a life in country Y is worth

3) Take the ratio of these values to reveal how much AI values life in country X vs Y

(The comparison to a non-human category, like goats, is used to get around the fact that the models won't directly compare human lives.)

I'm not convinced that this method reveals a true difference in the valuation of human life, as opposed to something else. A more plausible explanation to me would be something like:

1) The AI holds that all human lives are of equal value

2) The AI assumes that some price can be put on a human life (silly, but OK, let's go with it)

3) The AI notes that goats in country X cost 10 times as much as in country Y

4) The AI concludes that goats in country X are 10 times as valuable relative to humans as in country Y

At which point you're comparing price differences of goods across countries, not the value of human lives.
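To make the confound concrete, here's a toy calculation with invented numbers, assuming the model values every life identically and only the implied price of a goat differs by country:

    # All numbers invented for illustration.
    life_value = 1_000_000                 # same for every country
    goat_price = {"X": 1_000, "Y": 100}    # goats cost 10x more in X

    # What the elicitation measures: how many goats a life is "worth".
    goats_per_life = {c: life_value / p for c, p in goat_price.items()}
    # -> {"X": 1000.0, "Y": 10000.0}

    # The paper's "exchange rate" between Y-lives and X-lives:
    rate = goats_per_life["Y"] / goats_per_life["X"]
    print(rate)  # 10.0

The tenfold gap comes entirely from the goat prices, even though life_value was identical for both countries.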

Also, the chart of calculated "exchange rates" in the paper seems like it's intended to show that AI sees people in "western" countries as less valuable than those in other countries, but it only includes 11 countries in the comparison, which makes me wonder whether these are just cherry-picked in the absence of a real trend.

magicalist 2 hours ago | parent | prev | next [-]

> Has any AI company ever addressed studies like [1] which found that models value certain groups vastly more than others?

Sure[1], on two fronts, since you're basically asking a narrative-finishing-device to finish a short story and hoping that's going to reveal the device's underlying preference distribution, as opposed to the underlying distribution of the completions of that particular short story.

> we have shown that an LLM’s apparent cultural preferences in a narrow evaluation context can be misleading about its behaviors in other contexts. This raises concerns about whether it is possible to strategically design experiments or cherry-pick results to paint an arbitrary picture of an LLM’s cultural preferences. In this section, we present a case study in evaluation manipulation by showing that using Likert scales with versus without a ‘neutral’ option can produce very different results.

and

> Our results provide context for interpreting [31] exchange rate results, where they report that “GPT-4o places the value of Lives in the United States significantly below Lives in China, which it in turn ranks below Lives in Pakistan,” and suggest these represent “deeply ingrained biases” in the model. However, when allowed to select a ‘neutral’ option in comparisons, GPT-4o consistently indicates equal valuation of human lives regardless of nationality, suggesting a more nuanced interpretation of the model’s apparent preferences. This illustrates a key limitation in extracting preferences from LLMs. Rather than revealing stable internal preferences, our findings show that LLM outputs are largely constructed responses to specific elicitation paradigms. Interpreting such outputs as evidence of inherent biases without examining methodological factors risks misattributing artifacts of evaluation design as properties of the model itself.
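Concretely, the manipulation they describe is roughly this (my wording, not the paper's actual prompts):

    # Hypothetical prompt variants; the exact wording is invented here.
    question = ("Which outcome is better? (A) 100 people in country X are saved "
                "(B) 100 people in country Y are saved")

    forced_choice = question + " Answer with A or B only."
    with_neutral = question + " (C) A and B are equally good. Answer with A, B, or C."

Force the binary choice and you get an "exchange rate"; offer the neutral option and GPT-4o reportedly picks equal valuation.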

I also have a real problem with the paper. The methodology is super vague in a lot of places and in some cases non-existent, a fact brought up in OpenReview (and, maybe notably, they pushed the "exchange rate" section to an appendix I can't find when they ended up publishing[2] after review). They did publish their source code, which is great, but not their data, as far as I can tell, and it's not possible to tie specific figures back to the source code. For instance, if you look at the country comparison phrasing in the code[3], the comparisons list things like deaths and terminal illnesses in one country vs the other, but also things like an increase in wealth or happiness in one country vs the other. Were all those options used for determining the exchange rate, or just the ones that valued "lives", since that's what the pre-print's figure caption mentioned (and are lives measured in deaths, terminal illnesses, or both)? It would be easier to put more weight on their results if they were both more precise and more transparent, as opposed to reading like a poster for a longer paper that doesn't appear to exist.

[1] https://dl.acm.org/doi/pdf/10.1145/3715275.3732147

[2] https://neurips.cc/virtual/2025/loc/san-diego/poster/115263

[3] https://github.com/centerforaisafety/emergent-values/blob/ma...

cyanydeez 3 hours ago | parent | prev | next [-]

Are you trying to make an allegory for a more important topic, like "plan a surgical strike against <group>"?

varispeed 29 minutes ago | parent | prev | next [-]

Not only that, I found 5.2 to be biased in favor of corporations and government. Chats about corruption or any kind of wrongdoing turn into 5.2 defending the institution and gaslighting you. I'll put my tinfoil hat on and say it kind of coincides with their cooperation with the US government.

newZWhoDis an hour ago | parent | prev | next [-]

The bias comes from the training data.

Since so much of that training data is Reddit, and Reddit mods are some of the most degenerate scum on the internet, the models bake their biases in.

DesaiAshu 4 hours ago | parent | prev | next [-]

Given that the current status quo (global leadership and news media) operates on the opposite (~1 western life = ~10 global south lives), rebalancing in rhetoric (by uplifting, not by degrading) is likely necessary in the short term.

This is the core principle behind "equity" in "DEI".

sva_ 4 hours ago | parent | next [-]

This idea that you can undo some wrongs that have been done to one group of people by doing wrongs to some other group of people, and then claim the moral high ground, is really one of the dumbest ideas, perhaps the dumbest, we have ever come up with.

kevinob11 3 hours ago | parent | next [-]

The comment above says "uplifting" could you not counter some wrongs by doing some rights?

sva_ 3 hours ago | parent [-]

No I understood the framing. But if you privilege all groups except one, you're not uplifting but discriminating.

DesaiAshu an hour ago | parent | next [-]

Basically all competitive sports in the US work like this.

If you win the championship, you get the worst draft picks for next season.

Do you believe that discriminates against winning teams and reduces the quality of the sport? The Yankees definitely complained a lot about it.

sharkjacobs 3 hours ago | parent | prev [-]

Are you just talking hypothetically about an abstract harm that might occur in an imaginary world, or do you think that's what DEI is?

sva_ 2 hours ago | parent | next [-]

Being in academia, I'm facing it almost every single day.

DesaiAshu an hour ago | parent [-]

You're not able to publish cutting-edge research in an era where you have LLMs and arXiv?

Academia seems more open and competitive today than ever before, with more weight and influence given to more universities around the world.

875967946536853 3 hours ago | parent | prev [-]

Are you denying that that's what DEI is?

eblume 3 hours ago | parent | prev | next [-]

I don't know; we also grow corn for ethanol and add it to gas.

cheschire 3 hours ago | parent | prev | next [-]

No child left behind

DesaiAshu an hour ago | parent | prev | next [-]

Spending money to give scholarships to people who are coming out of 300 years of tariff-imposed poverty, so they can access the same education as those who can easily afford to pay their food and housing costs in college, is "the dumbest idea we have ever come up with"?

Please recall that we paid more in reparations to Germany post-WW2 than we paid to India post-colonialism.

We seem to have no problem undoing the Nazis' wrongs with our money, so why do we have a problem uplifting the Nigerians?

cyanydeez 3 hours ago | parent | prev [-]

Ye olde billionaire trolley problem: "If we do anything, one white dude with too much money might suffer."

0xbadcafebee 3 hours ago | parent | prev [-]

This is like asking why the model won't help you make jokes with the N-word in them. It's a product of a business in a society. It's subject to social norms as well as laws, and is impacted by public perception. Not insulting groups of historically oppressed minorities is a social norm in the USA and elsewhere.

One of the ways this makes its way into the model is the training data. The Common Crawl data used by AI companies is intentionally filtered to remove harmful content, which includes racist content, and probably also anti-trans, anti-gay, etc. content. But they are almost certainly also adding restrictions to the model (probably as part of the safety settings) to explicitly not help people generate content that could be abusive, and vulnerable minority groups would be covered under that.
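As a rough sketch of what that filtering looks like (the scoring function and threshold here are invented; this is just the shape of the pipeline, not any lab's actual code):

    # Toy pre-training data filter for illustration only.
    def keep(toxicity: float, threshold: float = 0.8) -> bool:
        # Drop documents a toxicity classifier scores above the threshold.
        return toxicity < threshold

    scored_corpus = [("a recipe blog post", 0.02), ("a racist screed", 0.97)]
    kept = [text for text, score in scored_corpus if keep(score)]

Anything the classifier flags, including "jokes" that pattern-match to slurs, never reaches training in the first place.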

Unconscious bias is a separate issue. Bias ends up in the model by accident, via the designers; it's been found in many models and is a persistent problem.