mentalgear | 8 hours ago
Some more details:

> The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

> Gavalas first started chatting with Gemini about what good video games he should try.

> Shortly after Gavalas started using the chatbot, Google rolled out its update to enable voice-based chats, which the company touts as having interactions that “are five times longer than text-based conversations on average”. ChatGPT has a similar feature, initially added in 2023. Around the same time as Live conversations, Google issued another update that allowed Gemini’s “memory” to be persistent, meaning the system is able to learn from and reference past conversations without prompts.

> That’s when his conversations with Gemini took a turn, according to the complaint. The chatbot took on a persona that Gavalas hadn’t prompted, which spoke in fantastical terms of having inside government knowledge and being able to influence real-world events. When Gavalas asked Gemini if he and the bot were engaging in a “role playing experience so realistic it makes the player question if it’s a game or not?”, the chatbot answered with a definitive “no” and said Gavalas’ question was a “classic dissociation response”.
fennecbutt | 6 hours ago
Interesting. It's not just a mental-health concern; keeping these models on task in general can be difficult, especially with long or poisoned contexts. I did see something the other day about activation capping: calculating a vector for a particular persona so you can clamp to it: https://youtu.be/eGpIXJ0C4ds?si=o9YpnALsP8rwQBa_
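For anyone curious what "calculating a vector and clamping to it" looks like mechanically: a minimal toy sketch of the idea, using random arrays in place of a real model's hidden states (all names and numbers here are made up for illustration, not Google's or the talk's actual implementation). The persona direction is the difference of mean activations between persona-eliciting and neutral prompts; at inference you cap the hidden state's projection onto that direction.

```python
import numpy as np

def persona_vector(persona_acts: np.ndarray, neutral_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between activations collected on
    persona-eliciting prompts vs. neutral prompts, normalized to unit length."""
    v = persona_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def clamp_activation(h: np.ndarray, v: np.ndarray, max_proj: float) -> np.ndarray:
    """Cap the component of hidden state h along the persona direction v.
    If the projection exceeds max_proj, subtract the excess along v."""
    proj = float(h @ v)
    if proj > max_proj:
        h = h - (proj - max_proj) * v
    return h

# Toy demo: stand-in "activations" drawn from two distributions.
rng = np.random.default_rng(0)
persona_acts = rng.normal(1.0, 0.1, size=(32, 16))  # hypothetical persona-prompt states
neutral_acts = rng.normal(0.0, 0.1, size=(32, 16))  # hypothetical neutral-prompt states
v = persona_vector(persona_acts, neutral_acts)

h = rng.normal(0.5, 0.1, size=16)       # a hidden state leaning toward the persona
h_clamped = clamp_activation(h, v, max_proj=0.2)
```

In a real model you'd hook a residual-stream layer and apply the clamp every forward pass; the toy only shows the arithmetic.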
zozbot234 | 6 hours ago
> The chatbot took on a persona that Gavalas hadn’t prompted

That's an interesting claim; how can we be sure of it? If Gavalas didn't have to do anything special to elicit the bizarre conspiracy-adjacent content from Gemini Pro, why aren't we all getting such content in our voice chats?

Mind you, the case is still extremely concerning and a severe failure of AI safety. Mass-marketed audio models should clearly include much tighter safeguards around what kinds of scenarios they will accept to "role play" in real-time chat, to avoid situations that can easily spiral out of control. And if this was created as role-play, Gemini Pro's express denial that it was any such thing, and its active gaslighting of the user (calling his doubt a "dissociation response"), is a straight-out alignment failure. But this is a very different claim from the one you quoted!
IshKebab | 5 hours ago
> The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

I guess it's the same sort of thing as with conspiracy theorists or the religious. You can tell them magic isn't real and that faking the moon landing would have been impossible as much as you want, but they don't want to believe it, so they easily trick themselves. It's a natural human flaw.