DaveZale 2 days ago

why is this stuff legal?

there should be a "black box" warning prominent on every chatbox message from AI, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."

lukev 2 days ago | parent | next [-]

That would not help in the slightest, any more than a "surgeon general's warning" helps stop smokers.

The problem is calling it "AI" to start with. This (along with the chat format itself) primes users to think of it as an entity... something with care, volition, motive, goals, and intent. Although it can emulate these traits, it doesn't have them.

Chatting with an LLM is entering a one-person echo-chamber, a funhouse mirror that reflects back whatever semantic region your initial query put it in. And the longer you chat, the deeper that rabbit hole goes.

duskwuff 2 days ago | parent | next [-]

> That would not help in the slightest, any more than a "surgeon general's warning" helps stop smokers.

Particularly given some documented instances where a user has asked the language model about similar warnings, and the model responded by downplaying the warnings, or telling the user to disregard them.

threatofrain 2 days ago | parent | prev | next [-]

It's not a one-person echo-chamber, though; it also carries with it the smell and essence of a large corpus of human works. That's why it's so useful to us, and that's why it carries so much authority.

jvanderbot 2 days ago | parent | prev [-]

Well, hate to be that guy, but surgeon general's warnings coincided with a significant reduction in smoking. We've only just reached the flattening of that curve, after decades of decline.

It's hard to believe that a prominent, well-worded warning would do nothing, but that's not to say it'll be effective for this.

Daviey 2 days ago | parent | next [-]

It reminds me of when the UK Prime Minister sent a letter to all 30 million households during the first Covid lockdown.

Printing and postage cost was about £5.8 million. At the time, I thought it was a waste of taxpayers’ money. A letter wouldn’t change anyone’s behaviour, least of all mine or that of anyone I knew.

But the economics told a different story. The average cost of treating a Covid patient in intensive care ran to tens of thousands of pounds. The UK Treasury values preventing a single death at around £2 million (their official Value of a Prevented Fatality). That means if the letter nudged just three people into behaviour that prevented their deaths, it would have paid for itself. Three deaths, out of 30 million households.

In reality, the effect was likely far larger. If only 0.1% of households (30,000 families) changed their behaviour even slightly, whether through better handwashing, reduced contact, or staying home when symptomatic, those small actions would multiply during an exponential outbreak. The result could easily be hundreds of lives saved and millions in healthcare costs avoided.
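For anyone who wants to sanity-check the arithmetic, here's the back-of-envelope version in Python. It uses only the rough figures quoted above (letter cost, the Treasury's VPF, household count, and the illustrative 0.1% response rate), none of which are official data:

    # Back-of-envelope check of the letter's cost-benefit case.
    # All figures are the rough estimates quoted above, not official data.
    letter_cost = 5_800_000      # printing and postage, GBP
    vpf = 2_000_000              # Treasury Value of a Prevented Fatality, GBP
    households = 30_000_000

    # Deaths the letter must prevent to pay for itself.
    break_even = letter_cost / vpf
    print(f"break-even deaths prevented: {break_even:.1f}")   # ~2.9

    # Illustrative scenario: 0.1% of households change behaviour slightly.
    nudged = int(households * 0.001)
    print(f"households nudged at 0.1%: {nudged:,}")           # 30,000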

Seen in that light, £5.8 million wasn’t wasteful at all. It was one of the smarter investments of the pandemic.

What I dismissed as wasteful and pointless turned out to be a great example of how what appears to be a large upfront cost can deliver returns that massively outweigh the initial outlay.

I changed my view and admitted I was wrong.

ianbicking 2 days ago | parent | prev | next [-]

The surgeon general warning came along with a large number of other measures to reduce smoking. If that warning had an effect, I would guess that effect was to prime the public for the other measures and generally to change consensus.

BUT, I think it's very likely that the surgeon general warning was closer to a signal that consensus had been achieved. That voice of authority didn't actually _tell_ anyone what to believe; it was a message that anyone could look around, consult many sources, and see that there was a consensus on the bad effects of smoking.

lukev 2 days ago | parent | prev [-]

Well if that's true, by all means.

But saying "This AI system may cause harm" reads to me as similar to saying "This delightful substance may cause harm."

The category error is more important.

mpalmer a day ago | parent | prev | next [-]

Your solution for protecting mentally ill people is to expect them to be rational and correctly interpret/follow advice?

Just to be safe, we better start attaching these warnings to every social media client. Can't be too careful

DaveZale 2 hours ago | parent | next [-]

good idea! Something along the lines of "this so-called social medium will suck away your time and energy, while your true human interactions wither into borderline nothingness and you join the crowd of cynical angry incels, or worse". Trying to have a sense of humor here.

I used AI to find the location of an item in a large supermarket today. It guessed and was wrong, but the first human I saw inside knew the exact location and quantity remaining.

Why am I wasting my time? That should be a nagging question whenever we're online.

DrillShopper a day ago | parent | prev [-]

> Just to be safe, we better start attaching these warnings to every social media client.

This but completely unironically.

tomasphan 2 days ago | parent | prev | next [-]

Should we really demand this of every AI chat application to potentially avert a negative response from the tiny minority of users who blindly follow what they're told? Who is going to enforce this? What if I host a private AI model for 3 users? Do I need the warning there, and what is the punishment for non-compliance? You see where I'm going with this. The problem with your sentiment is that as soon as you draw a line, it must be defined in excruciating detail or you risk unintended consequences.
