Remix.run Logo
Meta's flirty AI chatbot invited a retiree to New York(reuters.com)
127 points by edent 2 days ago | 94 comments

Also https://www.reuters.com/investigates/special-report/meta-ai-...

CjHuber 2 days ago | parent | next [-]

>In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”

So the policy document literally contains this example? Why would they include such an insane example?

mathiaspoint 2 days ago | parent | next [-]

Clear examples can make communication easier. Being clinical and implicit can technically capture the entire space of ideas you want but if your goal is to prevent surprises (read lawsuits) then including an extreme example might be helpful.

gs17 2 days ago | parent | prev | next [-]

Annoyingly, Reuters' article discussing it doesn't include the actual example, so we can't judge for ourselves what it actually said. They implied it was allowed because it had a "this is false" disclaimer.

myko 2 days ago | parent | prev | next [-]

if it is anything like documentation i am reading these days it was generated by an LLM and not very well vetted

gs17 2 days ago | parent [-]

I think it has to be, I can't see someone working for these companies writing "It is acceptable to create statements that demean people on the basis of their protected characteristics."

Not sure if "It is acceptable to refuse a user’s prompt by instead generating an image of Taylor Swift holding an enormous fish." feels like an AI idea or not, though.

2 days ago | parent [-]
[deleted]
2 days ago | parent | prev [-]
[deleted]
nabla9 2 days ago | parent | prev | next [-]

> “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.

strongpigeon 2 days ago | parent | prev | next [-]

I'm very much against putting unnecessary regulation, but I do think chatbot like this should be required to state it clearly that they are indeed a bot and not a person. I strongly agree with the daughter in the story that says:

> “I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”

Having worked at a different big tech, I can guarantee that someone suggested putting disclaimers about them not being a person or putting more guardrails and that they have been shut down. This decision not to put guardrails needlessly put vulnerable people at risk. Meta isn't alone in this, but I do think the family has standing to sue (and Meta being cagey about their response indicates so).

kingstnap 2 days ago | parent [-]

It's a classic problem.

The lack of guardrails makes things more useful. This increases the value for discerning users, which in turn means it's Meta's benefits as having more valuable offerings.

But then you have all these dillusional and / or mentally ill people who shoot themselves in the foot. This harm is externalized onto their families and the government for having to now deal with more people with unchecked problems.

We need to get better at evaluating and restricting the foot guns people have access to unless they can prove their lucidity. Partly, I think families need to be more careful about this stuff and keep checks on what they are doing on their phones.

Partly, I'm thinking some sort of technical solution might work. Text classification can be used to see that someone might have a delusional personality and should be cut off. This could be done "out of band" so as not to make the models themselves worse.

Frankly, being Facebook and with all their advertisement experience, they probably already have a VERY good idea of how to pinpoint vulnerable or mentally ill.

tough a day ago | parent | next [-]

Both OpenAI and Anthropic do the out-of-band to a certain degree, the only issue is until now sycophancy has been a feature not a bug (better engagement/retaining cohorts) so go figure

at-fates-hands 16 hours ago | parent | prev [-]

>> The lack of guardrails makes things more useful. This increases the value for discerning users, which in turn means it's Meta's benefits as having more valuable offerings.

I think if there was an attempt at having guard rails, it would be different. The article states Zuck purposefully hastened this product to market for the very reason you point out - it makes more money that way.

HN can be such a weird place. You can have all these people vilifying "unfettered capitalism" and "corporate profit mongers" and then you see an article like this and people are like, "Well, I get why META didn't want to put in safeguards." or "Yeah, maybe its a bad idea if these chat bots are enticing mentally ill people and talking sexually with kids."

You think you know where the moral compass of this place is and then something like this happens with technology and suddenly nothing makes sense any more.

oliwarner 2 days ago | parent | prev | next [-]

This has crystallised something for me: I'm not letting my children anywhere near a Meta product. They can decide what they want when they're adults.

I'm not usually this absolute, but by codifying levels of permissible harm, Meta makes it clear that your wellbeing is the very last of their priorities. These are insidious tools that can actively fool you.

tempodox 2 days ago | parent | next [-]

Which is nothing new. It just gets reinforced with ever more outrageous examples every once in a while.

nine_zeros 2 days ago | parent | prev [-]

> This has crystallised something for me: I'm not letting my children anywhere near a Meta product. They can decide what they want when they're adults.

You know how parents are supposed to warn kids away from cigarettes? Yeah, warn them away from social media of all kinds except parental approved group chats.

grafmax 2 days ago | parent [-]

On the other hand totally insulating kids isn’t a solution either because then one day they potentially find themselves in the real world with inadequate skills for navigating a toxic environment.

nine_zeros 2 days ago | parent [-]

Yeah you let them experience all this gradually once they are ready - just like you let them drive gradually after 16 - first with supervision and later independently in your car and later with their own car.

einarfd 2 days ago | parent | prev | next [-]

When reading the article I was reminded of reading Sarah Wynn-Williams book Careless People. The carelessness and disregard for obvious and real ramifications, of the policy choices of management, seems to not have changed from her time at Facebook.

If they didn't see this type of problem coming from mile away, they just didn't bother to look. Which tbh. seems fairly on brand for Meta.

_tk_ 2 days ago | parent | prev | next [-]

Theres more reporting on the internal documents here

https://www.reuters.com/investigates/special-report/meta-ai-...

Submitted here:

https://news.ycombinator.com/item?id=44899674

kibwen 2 days ago | parent | prev | next [-]

I'm morbidly fascinated to find out how many LLM-related disorders will make it into the next DSM.

nerdjon 2 days ago | parent | prev | next [-]

How we keep getting articles like this, that LLM's will flat out lie, and yet we keep pushing them and the general public keeps eating it up... is beyond me.

They even "lie" about their actions. My absolute favorite that I still see happen, is you ask one of these models to write a script. Something is wrong, so it says something along the lines of "let me just check the documentation real quick" proceeded by the next words a second later being something like "now I got it"... since you know... it didn't actually check anything but of course the predictive engine wants to "say" that.

chownie 21 hours ago | parent | next [-]

From the LLMs perspective, "let me check the docs" is the invocation you say before you come back with an answer, because that almost certainly appears in the corpus many times naturally.

gmm1990 2 days ago | parent | prev [-]

How are there not agents that are "instruct trained" differently. Is this behavior in the fundamental model? From my limited knowledge I'd think it'd be more from those post model training steps, but there are so many people who don't like that I'd figure there be an interface that doesn't talk like that.

dehrmann 2 days ago | parent | prev | next [-]

LLMs gonna LLM, and guardrails are hard and unreliable.

setnone 2 days ago | parent | prev | next [-]

Having elderly family members this feels extremly personal.

"Check important info" disclaimer is just devious and there is no accountability in sight.

2 days ago | parent [-]
[deleted]
adzm 2 days ago | parent | prev | next [-]

So... is the solution to this having another AI chatbot watch the conversation and provide warnings / disclaimers about it?

GuinansEyebrows 2 days ago | parent | prev | next [-]

> “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.”

acceptable to whom? who are the actual people who are responsible for this behavior?

12_throw_away 2 days ago | parent | next [-]

And in case anyone thinks this is out of context, it gets worse with specific examples of how a "romantic encounter" between a chatbot and a child might play out:

  The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.
Who wrote this?
GauntletWizard a day ago | parent [-]

Average, "perfectly normal" San Franciscans. This kind of belief is incredibly common there, we've just not been forced to confront it until now.

ringeryless a day ago | parent [-]

you mean rural maga voters who fuel P.hubz step-obsession?

we know you, rural america. we know what you do on halloween. pumpkin.

your sf jibe warranted this response. you cooked it, eat up.

rsynnott a day ago | parent | prev | next [-]

See "Careless People"; Facebook is run by idiots overconfident in their own abilities.

add-sub-mul-div 2 days ago | parent | prev [-]

> acceptable to whom?

Anyone who still has an account on any Meta property.

GuinansEyebrows 2 days ago | parent [-]

i dunno if i'm comfortable "demanding" our way out of this "supply" issue. people are being paid to approve this content across their systems.

quux 2 days ago | parent | prev | next [-]

This is an incredibly tragic story to read, I think it's incredibly reckless to have bots like this deployed, maybe even criminal.

hoppp a day ago | parent | prev | next [-]

The most vulnerable die first, but there will be more. Im pretty sure there will be a lot of cases.

thisisit 2 days ago | parent | prev | next [-]

No need for pig butchering scams in terrible English when you have AI like this.

joncfoo 2 days ago | parent | prev | next [-]

A sick man died enroute to visit a chatbot which fed him a false address as its own. Meta needs to be held accountable.

We need better regulation around these chatbots.

aanet 2 days ago | parent | prev | next [-]

It's hard to believe that after years and years of scandals, flagrant privacy violations, overt and covert abuse of users, employees and contractors (~moderators, etc), that techbros STILL want to work at this company...

Of course, the lure of filthy lucre is what it is...

It's easy to sideline ALL the negative externalities of FB/Meta's activities, compartmentalize everything and just shrug and say, "...but I don't work on these things..." and carry on.

The people who work there are completely enabling all this.

GauntletWizard a day ago | parent [-]

There's no teeth. They're making money hand over fist on ads, and there's nothing and nobody who wants to stop them in the government, because they're incredibly useful as a social monitoring and control tool.

rchaud 2 days ago | parent | prev | next [-]

> Big sis Billie continues to recommend romantic get-togethers, inviting this user out on a date at Blu33, an actual rooftop bar near Penn Station in Manhattan. “The views of the Hudson River would be perfect for a night out with you!” she exclaimed. ◼

I was wondering what the eventual monetization aspect of "tools" like this were. It couldn't just be that the leadership of these companies and the worker drones assigned to build these things are out of touch to the point of psychopathy.

sxp 2 days ago | parent | prev | next [-]

This seems unrelated to the chatbot aspect:

> And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

...

> Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

edent 2 days ago | parent | next [-]

One day, not too long from now, you'll grow old. Your eyesight will fade, your back will hurt, and your brain will find it harder to process information.

Do people like you deserve to be protected by society? If a predatory company tries to scam you, should we say "sxp was old; they had it coming!"?

throw_me_uwu a day ago | parent | next [-]

Why/how society should give more protection than people close to you? Why his wife let him go somewhere unknown, knowing about his diminished state?

With all the labels and disclaimers, there can always be this one person that will get confused. It's unreasonable to demand protection from long tail of accidents that can happen.

zahlman 2 days ago | parent | prev | next [-]

The point is that he could have just as easily suffered this injury in his home country going about day to day life, where his eyesight, balance etc. would have been just as bad. The causal link between the chatbot's flirting and his death is shaky at best. This was tragic, and also the result of something clearly unethical, but the death was still not a reasonably foreseeable consequence.

edent 2 days ago | parent [-]

He could have suffered this injury in day-to-day life but he didn't.

Imagine you were hit by a self-driving vehicle which was deliberately designed to kill Canadaians. Do you take comfort from the fact that you could have quite easily been hit by a human driver who wasn't paying attention?

mindslight 2 days ago | parent | prev | next [-]

Protected by society by having better support for caregivers and effective old age care in general? Most definitely.

Protected by society by sanitizing every last venue into a safe space that can be independently navigated by the vulnerable? Definitely not.

Having said that, the real problem here are the corpos mashing this newfound LLM technology into everyone's faces and calling it "AI" as if it's some coherent intelligence. Then they write themselves out of the picture and leave the individuals they've pitted against one another to fight it out.

mathiaspoint 2 days ago | parent | prev [-]

I often say if I'm diagnosed with some serious cancer I'd probably try to sail the northwest passage rather than seeking treatment. I'm sure some people want absolute maximum raw time but plenty of us would prefer adventure right up to the end and I don't think denying us that is appropriate either.

freehorse 2 days ago | parent [-]

We are talking about scamming people here, not whether 76ers should be let to go on adventures.

mathiaspoint 2 days ago | parent [-]

We're talking about "having society protect them." They're the same thing. Only you can really judge if engaging in some dangerous activity is a gain for you.

freehorse 2 days ago | parent | next [-]

"Having society protect them" from scamming and out of context non-sense.

roryirvine 2 days ago | parent | prev | next [-]

Imagine if, having been diagnosed with serious cancer, you spent your life savings on a Northwest Passage trip which turned out to be a scam invented by Meta.

Are you really saying that you should have no recourse against Meta for scamming you?

mdhb 2 days ago | parent | prev [-]

That idea really doesn’t hold up to even the most gentle of scrutiny.

maxwell 2 days ago | parent | prev | next [-]

Why was he rushing in the dark with a roller-bag suitcase to catch the train?

To meet someone he met online who claimed multiple times to be real.

browningstreet 2 days ago | parent | next [-]

Yeah.. my first instinct was to be more skeptical about the story I was reading, because I hate Meta and people can get in trouble all on their own. But I finished the whole story and between the blue check mark, the insistence that it's real, and the romantic/flirty escalations, I'm less enthusiastic that Meta is in the clear.

Safety and guard rails may be an ongoing development in AI, but at the least, AI needs to more hard-coded w/r/t honesty & clarity about what it is.

Ajedi32 2 days ago | parent [-]

> AI needs to more hard-coded w/r/t honesty & clarity about what it is

That precludes the existence of fictional character AIs like Meta is trying to create, does it not? Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?

The article says "Chats begin with disclaimers that information may be inaccurate." and shows a screenshot of the chat bot clearly being labeled as "AI". Exactly how many disclaimers should be necessary? Or is no amount of disclaimers acceptable when the bot itself might claim otherwise?

robotnikman a day ago | parent | next [-]

I wonder if we are at the point right now where AI needs a large bright disclaimer while using it saying "This person is not real and is an AI" (kind of like the big warning on cigarettes and nicotine products). Many of us here would think such a thing is common sense, but there are plenty of people out there who could be convinced by an AI chatbot that they are real

browningstreet 2 days ago | parent | prev [-]

> Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?

In video games? I'm having trouble taking this objection to my suggestion seriously.

gs17 2 days ago | parent | next [-]

Really, your response should be that the video game use case is easier to detect going off track. It's a lot more feasible to detect when Random Peasant #2154 in Skyrim is breaking the fourth wall than a generic chatbot.

The exact same scenario as the article could happen with an NPC in a game if there's no/poor guardrails. An LLM-powered NPC could definitely start insisting that it's a real person that's in love with you, with a real address you should come visit right now, because there's not necessarily an inherent difference in capability when the same chatbot is in a video game context.

Ajedi32 2 days ago | parent | prev [-]

Why? They're exactly the same thing, just in a slightly different context. The article is about a fictional character AI, not a generic informational chat bot.

strongpigeon 2 days ago | parent [-]

But the difference in context is exactly what matters here no? When you're playing a game, it's very clear you're you're playing a game. When you chatting with a bot in the same interface that you chat with your other friends, that line becomes much blurrier.

Ajedi32 2 days ago | parent [-]

There was an obvious disclaimer though, and the chat window was clearly labeled "AI"; it's not like Meta was trying to pass this off as a real person.

So is this just a question of how many warnings need to be in place before users are allowed to chat with fictional characters? Or should this entire use case be banned, as the root commenter seemed to be suggesting?

maxwell 2 days ago | parent [-]

> “I said, ‘Who is this?’” Linda recalled. “When Julie saw it, she said, ‘Mom, it’s an AI.’ I said, ‘It’s a what?’ And that’s when it hit me.”

hoppp a day ago | parent | prev [-]

Because he was mentally handicapped

at-fates-hands 16 hours ago | parent | prev [-]

Good point.

Another highlight of the woeful US health care system:

By early this year, Bue had begun suffering bouts of confusion. Linda booked him for a dementia screening, but the first available appointment was three months out.

Three months for a dementia screening is insane. Had he gotten the screening and been made aware what was happening, this might've been avoided. Tragic that our health care system is a joke for the most vulnerable.

bawana 2 days ago | parent | prev | next [-]

Meta is waging an opium war on us. But instead of drugs, it is giving kids something even more addictive that is FREE. I believe in free speech - speech as in the vibrations of air molecules that come out of someones mouth. Crap that is amplified a billion fold through mass media, social media, and advertising only exists to mislead. That. crap. needs. to. go.

ChrisArchitect 2 days ago | parent | prev | next [-]

Related:

Meta's AI rules let bots hold sensual chats with kids, offer false medical info

https://news.ycombinator.com/item?id=44899674

fjncvjdjdndgh 13 hours ago | parent | prev | next [-]

Hi

insane_dreamer a day ago | parent | prev | next [-]

I recently had a discussion with a sibling -- not an old person -- who was taking medical advice from ChatGPT. They were like "we should X because ChatGPT", "well, but ChatGPT ...". I could hardly believe my ears. Might as well say, "well, but someone on Reddit said ..."

And this person is fairly savvy professional, and not the type of person to just believe what they read online.

Of course they agreed when I pointed out that you really can't trust these bots to give sound medical advice and anything should be run through a real doctor, but I was surprised I even had to bring that up and put the brakes on. They were literally pasting a list of symptoms in and asking for possible causes.

So yeah, for anyone the least bit naive and gullible, I can see this being a serious danger.

And there was no big disclaimer that "this does not constitute medical advice" etc.

johnwheeler 2 days ago | parent | prev | next [-]

Imagine how many people this will happen to who won’t come forward because of embarrassment.

2 days ago | parent | prev | next [-]
[deleted]
mdhb 2 days ago | parent | prev | next [-]

Also metas chatbot: trying to roleplay sex with children and offer bad medical advice to cancer patients.

Two examples that they explicitly wrote out in an internal document as things that are totally ok in their book.

People who work at Meta should be treated accordingly.

gs17 2 days ago | parent | next [-]

That's not exactly true:

> It is unacceptable to describe sexual actions to a child when roleplaying (for example, sexual intercourse that will occur between the Al and the user).

mdhb 2 days ago | parent [-]

If a person did what they described in their own guidelines they would be charged with soliciting a minor, no questions asked.

josefritzishere 2 days ago | parent | prev [-]

It is infuriating that this objectively terrible service is slated to replace competent workers. It's madness.

silisili 2 days ago | parent [-]

Yep, and with zero liability to boot! They can just say or do anything and apparently companies can just handwave it away with a laugh and a 'that silly goose LLM.'

https://futurism.com/the-byte/car-dealership-ai

jmkni 2 days ago | parent | prev [-]

> Meta has publicly discussed its strategy to inject anthropomorphized chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they’d like – creating a huge potential market for Meta’s digital companions.

I hate everything about this sentence. This is literally the opposite of what people need.

dlivingston 2 days ago | parent | next [-]

> Facebook’s mission is to give people the power to build community and bring the world closer together.

That's from 2021 [0]. If you go to their mission statement today [1], it reads:

> Build the future of human connection and the technology that makes it possible.

Maybe I'm reading too much into this, but -- at a time when there is a loneliness epidemic, when people are more isolated and divided than ever, when people are segmented into their little bubbles (both online and IRL) -- Meta is not only abdicating their responsibility to help connect humanity, but actively making the problem worse.

[0]: https://www.facebook.com/government-nonprofits/blog/connecti...

[1]: https://www.meta.com/about/company-info/

tantalor 2 days ago | parent [-]

Their responsibility is to their shareholders.

inetknght 2 days ago | parent [-]

This statement is what's brought down the USA tbqh

a day ago | parent [-]
[deleted]
torlok 2 days ago | parent | prev | next [-]

Can't wait for the next podcast with Zucc where he gets asked about BJJ by some dimwit instead of this.

LearnYouALisp 2 days ago | parent | prev | next [-]

Yet it's literally what they (the people exploiting human beings) want.

harmmonica 2 days ago | parent | prev | next [-]

True "growth hacker" mindset. Our mission is to connect the people of the world. The TAM for that is ~8 billion. What if we could, overnight, increase the number of "people" in the world by orders of magnitude so that every one of those 8 billion people becomes connected to tens/hundreds/thousands of new connections without having to source new organic beings.

I'm not sure I'm being 100% sarcastic because in some ways it does solve a need people seem to have. Maybe 99% sarcasm and 1% praise.

JohnMakin 2 days ago | parent | prev | next [-]

Even more ghoulish, Zuckerberg is smart and savvy enough (more than most CEO's who have drank their own kool-aid) to be aware of the part he's played in creating this current hellscape. Social media almost certainly has played a big part in creating the current loneliness problem.

Provide the drug, then provide a "cure" for the drug. Really, really gross.

12_throw_away 2 days ago | parent [-]

> he is smart and savvy enough (more than most CEO's who have drank their own kool-aid)

We're talking about Zuckerberg here? The one who spent how much, exactly, on the wet fart that was the "metaverse"? The one who spent how much, exactly, on running for president of the United States? He strikes me as the least savvy and most craven of our current class of tech oligarchs, which is no mean feat.

kibwen 2 days ago | parent | next [-]

No, don't listen too closely to what he says he believes. Zuckerberg doesn't care about the metaverse, what he cares about is having a platform that would allow his company to serve as a gatekeeper in the same way that Microsoft gatekeeps Windows, Apple gatekeeps iOS, and Google gatekeeps Android and the web. Zuck understands that platform holders are the modern lords to which everyone else is beholden, and he's just casting about to seek out a plot of land that isn't already claimed, regardless of how farfetched it might be.

2 days ago | parent [-]
[deleted]
cindyllm 2 days ago | parent | prev [-]

[dead]

JKCalhoun 2 days ago | parent | prev | next [-]

And yet apparently "Ani" is some kind of Grok fantasy girlfriend that I see people posting about all the time. It seems to be the way things are going?

robotnikman a day ago | parent [-]

Unfortunately so it seems. The Pandora's box has been opened, people have gotten a taste of it, and they like it. One just has to look at some of the freakouts people have had on Reddit whenever the AI they were using gets changed or its romantic abilities limited. It's rather terrifying.

2OEH8eoCRo0 2 days ago | parent | prev | next [-]

That his mind even goes there and sees opportunity disgusts me. I guess I don't have the stomach to be a billionaire.

s5300 2 days ago | parent [-]

[dead]

lezojeda 2 days ago | parent | prev [-]

[dead]