▲ | Karawebnetwork 2 days ago |
Important follow-up to the US AI Action Plan: "PREVENTING WOKE AI IN THE FEDERAL GOVERNMENT" https://www.whitehouse.gov/presidential-actions/2025/07/prev...

> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.
▲ | nickpsecurity 2 days ago | parent [-] |
It's worth mentioning because AI developers have been using alignment training to make AIs see the world through the lens of intersectionality. That ranges from censoring what those philosophies would censor to simply presenting answers the way they would. Some models actually got dumber as they prioritized indoctrination as "safety" training. It appears that many employees at these companies think that way, too.

Most of the world, and a huge chunk of America, thinks differently. Many are not even aware that AIs are being built this way. So, we want AIs that don't hold a philosophy opposite to ours. We'd like them to be either more neutral or customizable to the user's preferences.

Given the current state, the first steps are to reverse the existing trend (e.g., political fine-tuning) and use open weights we can further customize. Later, maybe purge highly biased material from training sets when building new models. I find that certain keywords, whether liberal or conservative, often hint that a source is going to push politics.