thewebguyd 2 hours ago
On the flip side, Gemini did recommend the crisis hotline to the guy. We can't safeguard things to the point of uselessness. I'm not even sure there's a safeguard you could put in place for a situation like this other than recommending the crisis line (which Gemini did) and then terminating the conversation (which it did not do). But in critical mental health situations, abruptly terminating the conversation can itself have negative effects. Maybe LLMs need something like a surgeon general's warning: "Do not use if you have a mental health condition or are suicidal"?
piva00 2 hours ago
> and then terminating the conversation (which it did not do)

This is exactly the safeguard. Terminating the conversation is the only way to go: these things don't have a world model, they don't know what they're doing, and there's no way to correctly assess the situation at the model level. No more conversation; that's the only option, even if a motivated adversary might find jailbreaks to circumvent it.