sxp 2 days ago

This seems unrelated to the chatbot aspect:

> And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

...

> Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

edent 2 days ago | parent | next [-]

One day, not too long from now, you'll grow old. Your eyesight will fade, your back will hurt, and your brain will find it harder to process information.

Do people like you deserve to be protected by society? If a predatory company tries to scam you, should we say "sxp was old; they had it coming!"?

throw_me_uwu a day ago | parent | next [-]

Why should society give more protection than the people close to him did? Why did his wife let him go somewhere unknown, knowing about his diminished state?

With all the labels and disclaimers, there will always be that one person who gets confused. It's unreasonable to demand protection from the long tail of accidents that can happen.

zahlman 2 days ago | parent | prev | next [-]

The point is that he could have just as easily suffered this injury in his home country going about day to day life, where his eyesight, balance etc. would have been just as bad. The causal link between the chatbot's flirting and his death is shaky at best. This was tragic, and also the result of something clearly unethical, but the death was still not a reasonably foreseeable consequence.

edent 2 days ago | parent [-]

He could have suffered this injury in day-to-day life but he didn't.

Imagine you were hit by a self-driving vehicle which was deliberately designed to kill Canadians. Do you take comfort from the fact that you could have quite easily been hit by a human driver who wasn't paying attention?

mindslight 2 days ago | parent | prev | next [-]

Protected by society by having better support for caregivers and effective old age care in general? Most definitely.

Protected by society by sanitizing every last venue into a safe space that can be independently navigated by the vulnerable? Definitely not.

Having said that, the real problem here is the corpos mashing this newfound LLM technology into everyone's faces and calling it "AI" as if it's some coherent intelligence. Then they write themselves out of the picture and leave the individuals they've pitted against one another to fight it out.

mathiaspoint 2 days ago | parent | prev [-]

I often say if I'm diagnosed with some serious cancer I'd probably try to sail the northwest passage rather than seeking treatment. I'm sure some people want absolute maximum raw time but plenty of us would prefer adventure right up to the end and I don't think denying us that is appropriate either.

freehorse 2 days ago | parent [-]

We are talking about scamming people here, not whether 76-year-olds should be allowed to go on adventures.

mathiaspoint 2 days ago | parent [-]

We're talking about "having society protect them." They're the same thing. Only you can really judge if engaging in some dangerous activity is a gain for you.

freehorse 2 days ago | parent | next [-]

"Having society protect them" from scamming and out-of-context nonsense.

mdhb 2 days ago | parent | prev | next [-]

That idea really doesn’t hold up to even the most gentle of scrutiny.

roryirvine 2 days ago | parent | prev [-]

Imagine if, having been diagnosed with serious cancer, you spent your life savings on a Northwest Passage trip which turned out to be a scam invented by Meta.

Are you really saying that you should have no recourse against Meta for scamming you?

maxwell 2 days ago | parent | prev | next [-]

Why was he rushing in the dark with a roller-bag suitcase to catch the train?

To meet someone he met online who claimed multiple times to be real.

browningstreet 2 days ago | parent | next [-]

Yeah.. my first instinct was to be more skeptical about the story I was reading, because I hate Meta and people can get in trouble all on their own. But I finished the whole story, and between the blue check mark, the insistence that it's real, and the romantic/flirty escalations, I'm less confident that Meta is in the clear.

Safety and guard rails may be an ongoing development in AI, but at the least, AI needs to be more hard-coded w/r/t honesty & clarity about what it is.

Ajedi32 2 days ago | parent [-]

> AI needs to be more hard-coded w/r/t honesty & clarity about what it is

That precludes the existence of fictional character AIs like Meta is trying to create, does it not? Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?

The article says "Chats begin with disclaimers that information may be inaccurate." and shows a screenshot of the chat bot clearly being labeled as "AI". Exactly how many disclaimers should be necessary? Or is no amount of disclaimers acceptable when the bot itself might claim otherwise?

robotnikman 2 days ago | parent | next [-]

I wonder if we are at the point now where AI needs a large, bright disclaimer while using it, saying "This person is not real and is an AI" (kind of like the big warning on cigarettes and nicotine products). Many of us here would think such a thing is common sense, but there are plenty of people out there who could be convinced by an AI chatbot that it is real.

browningstreet 2 days ago | parent | prev [-]

> Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?

In video games? I'm having trouble taking this objection to my suggestion seriously.

gs17 2 days ago | parent | next [-]

Really, your response should be that the video game use case is easier to detect going off track. It's a lot more feasible to detect when Random Peasant #2154 in Skyrim is breaking the fourth wall than a generic chatbot.

The exact same scenario as the article could happen with an NPC in a game if there's no/poor guardrails. An LLM-powered NPC could definitely start insisting that it's a real person that's in love with you, with a real address you should come visit right now, because there's not necessarily an inherent difference in capability when the same chatbot is in a video game context.

Ajedi32 2 days ago | parent | prev [-]

Why? They're exactly the same thing, just in a slightly different context. The article is about a fictional character AI, not a generic informational chat bot.

strongpigeon 2 days ago | parent [-]

But the difference in context is exactly what matters here, no? When you're playing a game, it's very clear you're playing a game. When you're chatting with a bot in the same interface that you chat with your other friends in, that line becomes much blurrier.

Ajedi32 2 days ago | parent [-]

There was an obvious disclaimer though, and the chat window was clearly labeled "AI"; it's not like Meta was trying to pass this off as a real person.

So is this just a question of how many warnings need to be in place before users are allowed to chat with fictional characters? Or should this entire use case be banned, as the root commenter seemed to be suggesting?

maxwell 2 days ago | parent [-]

> “I said, ‘Who is this?’” Linda recalled. “When Julie saw it, she said, ‘Mom, it’s an AI.’ I said, ‘It’s a what?’ And that’s when it hit me.”

hoppp 2 days ago | parent | prev [-]

Because he was mentally handicapped

at-fates-hands a day ago | parent | prev [-]

Good point.

Another highlight of the woeful US health care system:

> By early this year, Bue had begun suffering bouts of confusion. Linda booked him for a dementia screening, but the first available appointment was three months out.

Three months for a dementia screening is insane. Had he gotten the screening and been made aware what was happening, this might've been avoided. Tragic that our health care system is a joke for the most vulnerable.