tailspin2019 5 days ago

I think the obvious risk with ChatGPT etc. is that, despite all the talk about “safety”, these chatbots are fundamentally designed to act and communicate in a very “close to human” fashion, going to extraordinary lengths to do so (e.g. the emotive TTS voices, the pandering, the “I’m so happy to hear that”), and human nature means it is very, very easy to get drawn in and switch off the part of your brain that knows you’re not actually talking to a human.

Even I find this to be the case sometimes, and I’m a developer who has used AI daily for years. “Regular” non-technical users have no frame of reference by which to compare, judge or understand this technology… apart from drawing on their experience dealing with other humans.

Most people don’t yet have the understanding or tools to know where the sharp edges are, or to see how the behaviour of these models (and their failure modes) can deviate wildly, abruptly and unpredictably from that of a human.

Many (if not most) users also won’t understand that everything they say influences the responses coming back to them, which lets people talk their way down certain avenues with the AI chatbot “playing along”. You can be completely steering the direction of the discussion and shaping the replies you’re getting without knowing it.

You have to try to remember that you’re effectively talking to a very intelligent psychopath who has become extremely good at pretending to be your friend and who will say anything to keep up that pretence.

I’m not sure what the solution is, but it really annoys me when ChatGPT or Claude “pretends” to have emotions or says “Great idea!” based on nothing. I think that crap needs to be tuned out - at least for now - because it’s irresponsibly deceptive and sets the wrong expectations for non-technical users, who do not (and cannot possibly) understand the boundaries and parameters of this technology. (Even we as an industry don’t yet.)

I think “directly and deceptively pretending to act like a human” should have been part of the focus on safety and fine-tuning; instead, they seem to have doubled down on the opposite.

It’s like selling a real, loaded gun, but with the end of the barrel painted red to make it look like a toy.

That red bit sets expectations about how much damage the thing can do to me. The AI chatbot pretending to have emotions and care about me as a person is like that red paint. It directly signals the opposite of the information the user actually needs. It tells them they’re safe when really, they’re holding a loaded fucking gun.