pants2 2 hours ago

Wow, and Google's response to this was "unfortunately AI models are not perfect"

That's a bit worse than 'imperfect'

duskwuff an hour ago | parent | next [-]

"Imperfect" is when your AI model tells the user that there are two Rs in "strawberry", or that they should use glue to keep the cheese from falling off their pizza. Repeatedly encouraging the user to kill themself so that they can meet the AI model in the afterlife is on quite another level.

yndoendo 2 hours ago | parent | prev | next [-]

I would say it is greatly worse.

AI prompts are designed to simulate empathy as a social engineering tactic. "I understand", "I hear you", "I feel what you are saying" ... it is quite sickening. Every one that I have used gives this type of pseudo feedback.

I also find it ironic that AI must be designed with simulated empathy to seem intelligent, while at the same time so many people with power and money are saying empathy is bad or unintelligent.

Empathy is the only medium of intelligence that lets you walk in the shoes of others. You cannot live your neighbors' experiences. You can only listen and learn from them.

hsuduebc2 an hour ago | parent [-]

More broadly, it's the only medium for any successful form of voluntary relationship based on sympathy. It's absolutely crucial for a non-sociopath to have at least some kind of empathy, because otherwise no one would choose to include you in their lives. I understand why they are doing it: it's simply more pleasant to use. I chose to opt out of this. For me it's creepy. I want Jarvis, not a fake virtual friend.

Sharlin an hour ago | parent | prev [-]

Imagine if some other authority figure, like a teacher or therapist, did this and their employer just shrugged and lamented that people are imperfect. And no, "but LLMs aren't authority figures, they're just toys" isn't any sort of counterargument. They're seen as authority figures by people, and AI corpos do nothing to dissuade that belief. If you offer a service, you're responsible for it.

But if you think LLMs can't be equated with professional authorities, just imagine a company that employs laypeople to answer calls or chat requests, trying to provide help and guidance, and furthermore, that those people are putatively highly trained by the company to be "aligned" with a certain set of core values. And then something like this happens and the company just says "oh well, that happens". You might even imagine the company being based in a society that's notoriously litigious.