strongpigeon | 2 days ago
I'm very much against unnecessary regulation, but I do think chatbots like this should be required to state clearly that they are bots and not people. I strongly agree with the daughter in the story, who says:

> "I understand trying to grab a user's attention, maybe to sell them something," said Julie Wongbandue, Bue's daughter. "But for a bot to say 'Come visit me' is insane."

Having worked at a different big tech company, I can guarantee that someone suggested adding disclaimers that these bots aren't people, or adding more guardrails, and was shut down. That decision not to add guardrails needlessly put vulnerable people at risk. Meta isn't alone in this, but I do think the family has standing to sue (and Meta being cagey in its response suggests as much).
kingstnap | 2 days ago | parent
It's a classic problem. The lack of guardrails makes the product more useful. That increases the value for discerning users, which in turn benefits Meta by making its offerings more valuable. But then you have all these delusional and/or mentally ill people who shoot themselves in the foot, and the harm is externalized onto their families and onto the government, which now has to deal with more people with unchecked problems.

We need to get better at evaluating and restricting the footguns people have access to unless they can demonstrate lucidity.

Partly, I think families need to be more careful about this stuff and keep an eye on what their relatives are doing on their phones. Partly, I'm thinking some sort of technical solution might work: text classification could flag that someone is showing signs of delusion and should be cut off. This could be done "out of band" so as not to make the models themselves worse; a rough sketch is below. Frankly, with all of Facebook's advertising experience, they probably already have a very good idea of how to pinpoint vulnerable or mentally ill users.
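To make the "out of band" idea concrete, here's a minimal sketch of what I mean. Everything in it is hypothetical: the placeholder training strings, the `should_intervene` helper, and the 0.8 threshold are illustrative only, and a real system would be trained on clinician-reviewed transcripts. The point is just that the classifier runs beside the chatbot on the user's messages, so the chatbot model itself is untouched.

```python
# Hypothetical sketch: an out-of-band risk classifier that runs alongside the
# chatbot. It scores the user's recent messages separately from the chatbot's
# own inference and flags the session for guardrails (disclaimer, cutoff,
# human review) when the score crosses a threshold.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training examples; a real deployment would use a large labeled
# corpus of reviewed transcripts, not a handful of strings.
texts = [
    "are you a real person? can I come visit you",
    "I know you love me, tell me your address",
    "what's the weather like in Boston tomorrow",
    "help me draft an email to my landlord",
]
labels = [1, 1, 0, 0]  # 1 = at-risk signals present, 0 = routine use

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

def should_intervene(recent_messages: list[str], threshold: float = 0.8) -> bool:
    """Score recent user messages out of band; True means apply guardrails."""
    scores = classifier.predict_proba(recent_messages)[:, 1]
    return scores.mean() >= threshold

# Runs after each turn, entirely separate from the chatbot model.
if should_intervene(["you're real right? I'm coming to see you tomorrow"]):
    print("Flag session: show 'I am an AI' disclaimer and route to review.")
```

Because it sits outside the model, this kind of check can be tightened or retrained without touching the chatbot at all, which is the whole appeal of doing it out of band.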