dkersten 5 hours ago

This is similar to why I prefer LLMs to behave less human-like and more robotic and machine-like, because they're not humans or human-like; they are robotic and machine-like. The chatbot is not my friend and it can't be my friend, so it shouldn't behave like it's trying to be my friend. It should answer my queries and requests with machine-like no-nonsense precision and accuracy, not try to make an emotional connection. It's a tool, not a person.

sethammons 5 hours ago | parent | next [-]

You're absolutely right.

javier_e06 3 hours ago | parent | prev [-]

I hear you (I am not an LLM). I can't deny that the "You are absolutely right" gives me a shot of confidence and entices me to continue the dialog.

I am being manipulated.

I prefer the machine to reply:

Affirmative.

Unfortunately, these billion-dollar LLM enterprises are competing for eyeballs and clicks.

jerf 2 hours ago | parent | next [-]

With some effort, you can train yourself to respond to "You are absolutely right" by being offended at the attempt to manipulate you.

It's good training and has been since long before the AIs came along. For instance, the correct emotional response to a highly attractive man or woman on a billboard pitching some product, regardless of your opinions on the various complicated issues that may arise in such a situation, is to be offended that someone is trying to manipulate you through your basic human impulses. The end goal here isn't even the offendedness itself, but to block out the effects of the manipulation as much as possible. It may not be completely possible, but then, it doesn't need to be, and I'm not averse to a bit of overcompensation here anyhow.

Whether LLMs actually took this up a notch I'd have to think about, but they certainly blindsided a lot of people who had not yet developed defenses against highly conversational, highly personalized bootlicking. Up to this point, the mass media blasted out all sorts of bootlicking and chain-yanking and instinct manipulation of every kind they could think of, but the personalization was mostly limited to maybe printing your name on the flyer in your mailbox, and our brains could tell it wasn't actually a conversation we were in. LLMs can tell you exactly how wonderful you personally are.

Best get these defenses in place now. We're single-digit years at best away from LLMs personalizing all kinds of ads to this degree.

TomaszZielinski 3 hours ago | parent | prev [-]

My favorite reply is something like: "You're The Real GOAT!!! And now let's just quickly clarify some minor points", followed by a complete destruction of my arguments :).