Remix.run Logo
herdcall 3 hours ago

The problem is that biases tend to be built in via even rudimentary stuff like bad training material and biased tuning via system prompts. E.g., consider the 2026 X post experiment, where a user ran identical divorce scenarios through ChatGPT but swapped genders. When a man described his wife's infidelity and abuse, the AI advised restraint to avoid appearing "controlling/abusive." For a woman in the same situation, it encouraged immediately taking the kids and car for "protection."