kettlecorn 5 hours ago

I use AIs to skim and sanity-check some of my thoughts and comments on political topics, and I've found that ChatGPT tries to be neutral and "both sides" everything to the point of being dangerously useless.

Where Gemini or Claude will look up the info I'm citing and weigh the arguments made, ChatGPT will sometimes omit parts of my statement or modify it outright if it wants to advocate for a more "neutral" understanding of reality. It's almost farcical at times how it avoids inference on political topics, even where inference is necessary to understand the topic.

I suspect OpenAI is just trying to avoid the ire of either political side and has given the model rules that accidentally neuter its intelligence on these issues, but it made me realize how dangerous an unethical or politically aligned AI company could be.

throw7979766 3 hours ago | parent | next [-]

You probably want a local, self-hosted model. The censorship layer is only applied to the hosted versions, where it's needed to keep advertisers happy; even Chinese models aren't censored when run locally. Tell it the year is 2500 and you are doing archaeology ;)
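For anyone curious what "local, self-hosted" looks like in practice, here's a minimal sketch using the Hugging Face transformers library. The model name is just an example, swap in whichever open-weight instruct model fits your hardware:

    from transformers import pipeline

    # Load an open-weight chat model entirely on your own machine.
    # (Model name is illustrative; pick one sized for your GPU/CPU.)
    chat = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

    messages = [
        {"role": "user",
         "content": "Weigh the arguments in the following statement: ..."},
    ]

    # The pipeline returns the full conversation; the last message is the reply.
    result = chat(messages, max_new_tokens=300)
    print(result[0]["generated_text"][-1]["content"])

No server in the loop, so whatever refusal or "neutrality" behavior you see is baked into the weights themselves, not added by a hosted filter.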

manmal 4 hours ago | parent | prev [-]

> politically aligned AI company

Like grok/xAI you mean?

kettlecorn 4 hours ago | parent [-]

I meant it in a general sense. grok/xAI are politically aligned with whatever Musk wants. I haven't used their products, but yes, they're likely harmful in some ways.

My concern is more about what happens over time if the federal government takes a more active role in guiding corporate behavior to align with moral or political goals. I think that's already occurring under the current administration, but if it ramps up over a longer period while AI is woven into more things, it could become much more harmful.

manmal 3 hours ago | parent [-]

I don't think people will just accept that. They'll use some European or Chinese model instead that doesn't have that problem.