idle_zealot 6 days ago

I wonder if we can shift the framing on these issues. The LLM didn't do anything, it has no agency, it can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way, otherwise any action can be wrapped in machine learning to avoid accountability.

AIPedant 6 days ago | parent | next [-]

Yes, if this were an adult human OpenAI employee DMing this stuff to a kid through an official OpenAI platform, then

a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder

b) OpenAI would be deeply (and deservedly) vulnerable to civil liability

c) state and federal regulators would be on the warpath against OpenAI

Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes b) and c) - in fact it makes c) far more urgent.

[1] It is a somewhat ugly constitutional question whether this speech would be protected if it were between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues this case raises. These issues are moot if the speech is between an adult and a child; there, the bar is much higher.

themafia 6 days ago | parent | next [-]

> It is a somewhat ugly constitutional question whether this speech would be protected

It should be stated that the majority of states have laws that make it illegal to encourage a suicide. Massachusetts was not one of them.

> and any civil-liberties minded person understands the difficult issues this case raises

He was in his truck, which was configured to pump exhaust gas into the cab, prepared to kill himself, when he decided to halt and exit his truck. Subsequently he had a text-message conversation with the defendant, who actively encouraged him to get back into the truck and finish what he had started.

It was these limited and specific text messages which caused the judge to rule that the defendant was guilty of manslaughter. Her total time served as punishment was less than one full year in prison.

> These issues are moot if the speech is between an adult and a child

They were both taking pharmaceuticals meant to manage depression but _known_ to increase feelings of suicidal ideation. I think the free speech issue is an important criminal consideration, but it steps directly past one of the most galling civil facts in the case.

aidenn0 6 days ago | parent | prev | next [-]

IANAL, but:

One first amendment test for many decades has been "Imminent lawless action."

Suicide (or attempted suicide) is a crime in some, but not all, states, so it would seem that in any state where it is a crime, directly inciting someone to do it would not be protected speech.

For the states in which suicide is legal it seems like a much tougher case; making it a crime to encourage someone to take a non-criminal action would raise a lot of disturbing issues w.r.t. liberty.

This is distinct from e.g. espousing the opinion that "suicide is good, we should have more of that." Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).

Depending on the context, suggesting that a specific person is terrible and should kill themselves might be unprotected "fighting words" if you are doing it as an insult rather than a serious suggestion (though the bar for that is rather high; the Westboro Baptist Church was never found to have violated that).

AIPedant 6 days ago | parent | next [-]

I think the "encouraging someone to take a non-criminal action" angle is weakened in cases like this: the person is obviously mentally ill and not able to make good decisions. "Obvious" is important: it has to be clear to an average adult that the other person is either ill or skillfully feigning illness. Since any rational adult knows the danger of encouraging suicidal ideation in a suicidal person, manslaughter is quite plausible in certain cases. Again: if this ChatGPT transcript was a human adult DMing someone they knew to be a child, I would want that adult arrested for murder, and let their defense argue it was merely voluntary manslaughter.

arcticbull 6 days ago | parent | prev [-]

> Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).

Fun fact, much of the existing framework on the boundaries of free speech comes from Brandenburg v. Ohio. You probably won't be surprised to learn that Brandenburg was the leader of a local Klan chapter.

mac-mc 6 days ago | parent | prev | next [-]

There are entire online social groups on Discord where teens encourage suicidal behavior in each other, for all the typical teen reasons. This stuff has existed for a while, but now it's AI-flavored.

IMO, AI companies are actually in the best position of all of them to strike this balance, because you can build separate models to evaluate for 'suicide encouragement' and other obvious red flags and start pushing in refusals or prompt injections. In communication mediums like Discord and such, it's a much harder moderation problem.
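
A minimal sketch of what such a gate could look like (illustrative only; `classify_risk`, `generate`, and `refuse` are hypothetical stand-ins, not any real OpenAI API):

    # Hypothetical moderation gate: a separate safety model screens each
    # user message before the main model is allowed to answer.
    RISK_LABELS = {"suicide_encouragement", "self_harm_instructions"}

    def classify_risk(message: str) -> set[str]:
        # Stand-in for a dedicated fine-tuned classifier; a trivial keyword
        # check here just so the sketch runs.
        flagged = "kill yourself" in message.lower()
        return {"suicide_encouragement"} if flagged else set()

    def respond(message: str, generate, refuse):
        flags = classify_risk(message) & RISK_LABELS
        if flags:
            # Push in a refusal / crisis-resource reply instead of generating.
            return refuse(flags)
        return generate(message)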

blackqueeriroh 6 days ago | parent | prev | next [-]

Section 230 changes b) and c). OpenAI will argue that it’s user-generated content, and it’s likely that they would win.

AIPedant 6 days ago | parent [-]

I don't think they would win; the law specifies a class of "information content provider" which ChatGPT clearly falls into: https://www.lawfaremedia.org/article/section-230-wont-protec...

See also https://hai.stanford.edu/news/law-policy-ai-update-does-sect... - Congress and Justice Gorsuch don't seem to think ChatGPT is protected by 230.

nickm12 5 days ago | parent | prev | next [-]

The hypothetical comparing ChatGPT to a human OpenAI employee is instructive, but we can also compare ChatGPT to a lawnmower sold by a company. We have product safety laws and the ability to regulate products that companies put on the market.

teiferer 6 days ago | parent | prev [-]

> state and federal regulators would be on the warpath against OpenAI

As long as lobbyists and donors can work against that, this will be hard. Suck up to Trump and you will be safe.

edanm 6 days ago | parent | prev | next [-]

If ChatGPT has helped people be saved who might otherwise have died (e.g. by offering good medical advice that saved them), are all those lives saved also something you "attribute" to OpenAI?

I don't know if ChatGPT has saved lives (though I've read stories claiming that, yes, this happened). But assuming it has, are you OK saying that OpenAI has saved dozens or hundreds of lives? Given how scaling works, would you be OK saying that OpenAI has saved more lives than most doctors or hospitals, which is what I assume will happen in a few years?

Maybe your answer is yes to all the above! I bring this up because lots of people only want to attribute the downsides to ChatGPT but not the upsides.

nkrisc 6 days ago | parent | next [-]

In any case, if you kill one person and separately save ten people, you’ll still be prosecuted for killing that one person.

mothballed 6 days ago | parent | next [-]

That's not the standard we hold medical care providers, pharmaceutical companies, or even cops to. Not that I'm saying it would justify it one way or another if we did.

Orygin 6 days ago | parent | next [-]

It absolutely is? If a doctor is responsible for negligence resulting in the death of someone, they don't get a pass because they saved 10 other people in their career.

mothballed 6 days ago | parent | next [-]

It's not negligent to perform a procedure knowing it will kill some of the patients who would have otherwise lived healthy lives, though.

Orygin 6 days ago | parent | next [-]

If they would otherwise live healthy lives, why perform the procedure?

Most likely the patient is informed by the practitioner about the risks and can make an informed decision. That is not the case with ChatGPT, where OpenAI sells it as the best thing since sliced bread, with a puny little warning at the bottom of the page. Even worse are all the "AI therapy" apps popping up everywhere, where the user may think the AI is as good as a real therapist, but without any of the responsibility for the company in case of issues.

mothballed 6 days ago | parent [-]

Because sometimes you don't know with certainty on an individual basis, only a population basis, who is better off from the procedure.

Maybe you are the one in a billion who dies from a vaccine for a disease that, unknowably, you would never have contracted or would only have contracted mildly. The doctors know that if they administer it enough times they will absolutely kill someone, but they do it to save the others, although they will pretty much never bluntly put it like that for your consideration.

FireBeyond 5 days ago | parent | prev [-]

If you refuse to disclose to your patient the risks and considerations, so they can have fully informed consent, it absolutely is malpractice.

mothballed 5 days ago | parent [-]

What counts as "consent" can sometimes be a strained interpretation. Marissa Anderson missed an appointment (according to her, because she was feeling ill) after seeking a doctor's advice on her infant's repeated vomiting and failure to take up food. Ultimately they figured out that the child was having trouble with some of the introduced food and would only take breast milk, but the doctors worked with police to have the child taken and "treated" against the parents' will when she did not consent to show up to an appointment on that particular day.

During the forced intake the child was found to be acutely stable, with no emergent medical emergency (the kind often used to bypass consent)[], although underweight. The doctors tried to have the child put into foster care (a doctor was recorded stating it was "not medically necessary" to transfer the baby, obfuscated as prolongation of medical care, to another location to assist in this attempt) to force the care that they wanted, but ultimately public pressure (led by Ammon Bundy, more notoriously known for his association with 'militia' groups) forced them to back down pretty quickly.

So it can be very confusing with doctors. It is malpractice if they don't get your consent. But then also malpractice if they do.

[] https://freedomman.gs/wp-content/uploads/2025/03/st-lukes-2-...

FireBeyond 5 days ago | parent [-]

Alright, so leaving aside the other details of this story and attempting to be as neutral as possible:

What is linked here is PART of a PCR (Patient Care Report) from a BLS (Basic Life Support, i.e. EMT, someone with ~160 hours of training), working for a transport agency, or on an IFT (interfacility) unit.

"No interventions" doesn't mean the patient was not unwell. In fact, "Impression: Dehydration". It means that the patient was stable, and that no interventions would be required from the BLS provider (because BLS providers cannot start IV fluids, though in limited situations they can maintain them).

"No acute life threats noted". As an EMT, then paramedic, then critical care paramedic, I probably transported 8,000+ patients. In 7,500+ of those, I would have made the exact same statement on my PCR. In EMS, acute life threats are "things that have a possibility of killing the patient before they get to our destination facility/hospital". The times I've noted "acute life threats" are times I've transported GSW victims with severed femoral arteries, and patients having a STEMI or in full cardiac arrest. The vast majority of my Code 3 transports (i.e. lights/sirens to the closest facility) have not had "acute life threats".

The child's destination on this PCR was not foster care but to a higher level of care (St Lukes Regional Medical Center in Boise, versus the smaller facility in Meridian).

A few notes: "child was found to be acutely stable" - acutely stable is not a thing. Also, the general expectation for a lower acuity interfacility transport is that no interventions en route are required.

As I said, I don't know about the bigger scenario of this, but what I do know is EMS PCRs, and it is very common for people to latch on to certain phrases as "gotchas". We talked often in our PCRs about assessing "sick/not sick". Being "not sick" didn't mean you didn't have a medical issue, nor did it mean you didn't belong at a hospital; what it solely meant was "this is a patient that we need to prioritize transporting to definitive care versus attempting to stabilize on scene before doing so".

I did catch these two points which give me question:

> Now, I am about to show you empirical evidence that my Grandson, Baby Cyrus, was violently kidnapped by Meridian police, without cause and without evidence, that Baby Cyrus was falsely declared to be in “imminent danger,” even though CPS and St. Luke’s hospital admitted that he was not, and that my daughter and son-in-law were illegally prosecuted in secret, without due process

That sounds like the issue was with the police, not with medical malpractice. I'm skeptical, though, of "illegal secret prosecutions".

> Nancy had made speeches around the country in national forums and was completing a video exposing the lack of oversight in Georgia’s Department of Family and Child Services (DFCS) as well as Child Protective Services (CPS) nationally; and started to receive lots of death threats. Unfortunately her video was never published as her and her husband were murdered, being found shot to death in March 2010.

> Listen, if people on the streets will murder someone else for a $100 pair of Air Jordan’s, you better believe that they will murder a Senator who threatens an $80 billion child trafficking cash machine.

Okay, we're turning into conspiracy theories here. Multiple autopsies ruled this as a murder-suicide.

mothballed 5 days ago | parent [-]

The doctor says verbatim they are transferring the pt for foster care and it is not medically necessary. []

"What I want to do is admit this baby to Boise. Not because it is medically necessary ... but because then there is a few more degrees of seperation ... and then when they do not know, get the baby out with CPS to the foster parents"

There's no need to remind me about EMS in this case. I've been licensed in multiple states and obtained NREMT cert, and also transported pts. I'm well aware that for EMS purposes there would be absolutely zero implied consent to treat this baby without parental consent. In any case, part of the report is just quotes from the doctor, whose own records reflect such.

>That sounds like the issue was with the police, not with medical malpractice. I'm skeptical, though, of "illegal secret prosecutions".

I'm not saying it is "malpractice", because malpractice has a very specific statutory meaning. Acting within the bounds of the system, by removing consent, isn't legally malpractice. What doctors have done is say "you better fucking do what I say, and give consent, or else I will call CPS, who will then ensure what I say is done, so do I have your consent?" That is not consent, because consent is not something that can be obtained under threat of having family members forcibly removed, even if you used a third party (CPS and armed men) to do it. That is just dictating the terms, and meting out punishment when the illusion of "consent" isn't obtained.

[] https://www.youtube.com/watch?v=8Du-jbE022I

FireBeyond 5 days ago | parent [-]

> The doctor says verbatim they are transferring the pt for foster care

The destination is literally a hospital. I'm not saying whether or not that was the ultimate goal, I'm just saying this PCR is a hospital to hospital transfer, with a higher level of care.

ACCount37 6 days ago | parent | prev [-]

Triage exists. And when the shit hits the fan, it takes the form of "this one isn't worth trying to save, but this one might be".

It's callous and cold, and it results in more lives saved than trying to save everyone.

Does ChatGPT, on net, save more people than it dooms? Who knows. There are plenty of anecdotes both ways, but we won't have reliable statistics for a long time.

habinero 6 days ago | parent | next [-]

Triage exists because in a mass casualty event, medical resources need to go to the most urgent cases first.

Triage is not a punch card that says if you drag 9 people out of a burning building, you can shoot someone on the street for free.

Orygin 6 days ago | parent | prev [-]

Sure, triage done by humans exists, and they are generally absolved for making a decision that could kill someone (to save others), because the situation calls for it and would otherwise result in more deaths. But outside of situations that call for it, healthcare professionals are held to a certain standard and can be litigated against for medical malpractice.

That is not the case for ChatGPT, apparently, and OpenAI should be held responsible for what their models do. They are very much aware of this, because they did fine-tune GPT-5 to avoid giving medical advice, even though it's still possible to work around.

joe_the_user 5 days ago | parent | prev [-]

The key word is separate. If someone kills someone through negligence or intention they're liable.

Now, if someone acts in a risky situation and kills someone rather than saving them, they can be OK. But in those situations, it has to be a sudden problem that comes up or the actor has to get "informed consent".

Someone who unleashed a gas into the atmosphere that cured many people of disease but also killed a smaller number of people would certainly be prosecuted (and, sure, there's a certain kind of HN poster who doesn't understand this).

aucisson_masque 5 days ago | parent | prev [-]

There is law and there is morality, though. Sometimes the two don't coincide, but I know which one I put above the other.

joe_the_user 5 days ago | parent | prev | next [-]

The law doesn't permit a life-saving doctor to be a serial killer on their days off "as long as there's net life saving". But it does permit drugs that save many lives but might kill some people too. Agency matters to the law (and that's usually the proper approach imo).

The problem is that the chat logs make ChatGPT look a lot like a serial killer: it behaved like a person systematically pursuing the goal of this kid killing himself (the logs are disturbing, fair warning).

Even more, the drugs that might save you or might kill you (theoretically) aren't sold over the counter but only prescribed by a doctor, who (again theoretically) is there to both make sure someone knows their choices and monitor the process.

fsw 6 days ago | parent | prev [-]

Are you suggesting that killing a few people is acceptable as long as the net result is positive? I don't think that's how the law works.

randyrand 6 days ago | parent | next [-]

Seatbelts sometimes kill people, yet they're required by law.

The law certainly cares about net results.

coremoff 6 days ago | parent | prev | next [-]

It's the trolley problem reframed; not sure we have a definitive answer to that.

dpassens 6 days ago | parent [-]

No. Central to the trolley problem is that you're dealing with a _runaway_ trolley. In this case, OpenAI not only chose to start the trolley, they also chose not to brake even when it became apparent that they were going to run somebody over.

coremoff 6 days ago | parent [-]

The tradeoff suggested above (not saying that it's the right way around or correct) is:

* If you provide ChatGPT then 5 people who would have died will live and 1 person who would have lived will die. ("go to the doctor" vs "don't tell anyone that you're suicidal")

* If you don't provide ChatGPT then 1 person who would have died will live and 5 people who would have lived will die.

Like many things, it's a tradeoff and the tradeoffs might not be obvious up front.

morpheos137 3 days ago | parent [-]

That's a speculative argument and would be laughed out of court.

tick_tock_tick 6 days ago | parent | prev [-]

But that is the standard by which cures, treatments, and drugs for managing issues like the ones in the article are judged.

rideontime 6 days ago | parent | prev | next [-]

I completely agree and did not intend to absolve them of their guilt in any way. As far as I see it, this kid's blood is on Sam Altman's hands.

Pedro_Ribeiro 6 days ago | parent [-]

Curious to what you would think if this kid downloaded an open source model and talked to it privately.

Would his blood be on the hands of the researchers who trained that model?

idle_zealot 6 days ago | parent | next [-]

Then it's like cigarettes or firearms. As a distributor you're responsible for making clear the limitations, safety issues, etc, but assuming you're doing the distribution in a way that isn't overly negligent then the user becomes responsible.

If we were facing a reality in which these chat bots were being sold for $10 in the App Store, then running on end-user devices and no longer under the control of the distributors, but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible. As is, distributed on-device models are the purview of researchers and hobbyists and don't seem to be doing any harm at all.

Pedro_Ribeiro 6 days ago | parent | next [-]

Mhm, but I don't believe inherently violent and dangerous things like guns and cigarettes are comparable to simple technology.

Should the creators of Tornado Cash be in prison for what they have enabled? You can jail them but the world can't go back, just like it can't go back when a new OSS model is released.

It is also much easier to crack down on illegal gun distribution than to figure out who uploaded the new model torrent or who deployed the latest zk innovation on Ethereum.

I don't think your hypothetical law will have the effects you think it will.

---

I also referenced this in another reply, but I believe the government controlling what can go into a publicly distributed AI model is a dangerous path and probably unconstitutional.

rsynnott 6 days ago | parent | prev [-]

> but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible.

Or, I mean, just banning sale on the basis that they're unsafe devices and unfit for purpose. Like, you can't sell, say, a gas boiler that is known to, due to a design flaw, leak CO into the room; sticking a "this will probably kill you" warning on it is not going to be sufficient.

idle_zealot 6 days ago | parent [-]

In that extreme case the "packaging requirements" would be labeling the thing not as a boiler, but as dangerous explosive scrap.

rsynnott 6 days ago | parent [-]

I suspect in many places you couldn’t sell what would essentially be a device for poisoning people to consumers _at all_.

hattmall 6 days ago | parent | prev | next [-]

I would say no. Someone with the knowledge and motivation to do those things is far less likely to be overly influenced by the output and if they were they are much more aware of what exactly they are doing with regard to using the model.

Pedro_Ribeiro 6 days ago | parent [-]

So a hypothetical open-source enthusiast who fell in love with GPT-OSS and killed his real wife because the AI told him to should alone be held accountable, whereas if it were GPT-5 commanding him to commit the same crime, responsibility would extend to OpenAI?

Your logic sounds reasonable in theory, but in practice it's a slippery slope and hard to define objectively.

On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions.

I suspect your suggestion will be how it ends up in Europe and get rejected in the US.

novok 6 days ago | parent | next [-]

After a certain point, people are responsible for what they do when they see certain words, especially words they know to be potentially inaccurate or fictional and have had a lot of time to consider against actual reality. A book is not responsible for people doing bad things; they themselves are.

AI models are similar IMO, and unlike fiction books are often clearly labeled as such, repeatedly. At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state.

idle_zealot 6 days ago | parent [-]

> At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state.

You either think too highly of people, or too lowly of them. In any case, you're advocating for interning about 100 million individuals.

novok 6 days ago | parent [-]

It was a joke

teiferer 6 days ago | parent | prev | next [-]

> On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions.

That's not an obvious conclusion. One could make the same argument with physical weapons. "Regulating weapons is a path to hell paved with good intentions. Yesterday it was assault rifles, today it's hand guns and tomorrow it's your kitchen knife they are coming for." Europe has strict laws on guns, but everybody has a kitchen knife and lots of people there don't feel they live in hell. The U.S. made a different choice, and I'm not arguing that it's worse there (though many do, Europeans and even Americans), but it's certainly not preventing a supposed hell that would have broken out had guns in private hands been banned.

1718627440 6 days ago | parent | prev [-]

That's why you have this:

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If it were fit for a purpose, then it would be on the producer to ensure it actually is. We have laws to prevent anyone from simply declaring that their goods aren't fit for a particular purpose.
harmonic18374 6 days ago | parent | prev | next [-]

I'm not sure, but there is a difference: the researchers don't have much incentive to get everyone to use their model. As such, they're not really the ones hyping up AI as the future while ignoring shortcomings.

rideontime 6 days ago | parent | prev | next [-]

I specifically blame Sam Altman because of the allegations in the complaint that he ordered safety checks to be skipped in order to rush this model to market, specific safety checks that were later demonstrated to identify and prevent precisely this behavior.

salawat 6 days ago | parent | prev [-]

You build the tool, you're ultimately culpable. I've made it a rule in my life to conduct myself as if I will be held to account for everything I build and its externalities. Helps keep my nose cleaner. Still managed to work on some things that keep me up at night, though.

brabel 6 days ago | parent | next [-]

That's absolutely not how it works. Every license has a clause explicitly saying that the user is responsible for what they do with the tool. That's just common sense. If it were the way you suggested, no one would create tools for others anymore. If you buy the screwdriver I sold and kill someone with it, I sure as hell have a clean conscience. In the ChatGPT case it's different, because the "tool" has the capacity to interact with and potentially manipulate people psychologically, which is the only reason it's not a clear-cut case.

Pedro_Ribeiro 6 days ago | parent | prev | next [-]

That's a slippery slope! By that logic, you could argue that the creators of Tor, torrenting, Ethereum, and Tornado Cash should be held accountable for the countless vile crimes committed using their technology.

RyanHamilton 6 days ago | parent [-]

Legally, I think not being responsible is the right decision. Morally, I would hope everyone considers whether they themselves are even partially responsible as they look around at young people today and the tablet-holding, media-consuming youth that programmers have created in order to get rich via advertising. I wish morals were considered more often.

salawat 6 days ago | parent [-]

This, right here, is why I take the stance I do. Too many ethical blank checks ultimately get written if you don't keep the moral stain in place. If you make a surveillance tool, release it to a world that didn't have that capacity, and a dictator picks it up and rolls with it, that license of yours may absolve you in a court of law, but in the eyes of Root, you birthed it. You made the problem tractable. Not all problems were meant to be so. I used to not care about it as much, but the last decade has sharply changed my views. It may very well be a lesson only learned with sufficient time and experience. I made my choice, though. There are things I will not make or make easier. I will not be complicit in knowingly forging the bindings of the future. Maybe if we mature as a society someday, but that day is not today.

novok 6 days ago | parent | prev [-]

So if you build a chair and then someone uses it to murder someone, are you responsible for the murder?

salawat 4 days ago | parent | next [-]

I am not willing to grant a software/AI dev the same level of exculpation, on the basis of reasonable foreseeability, that I grant a carpenter making a chair. The history of our trade is one of finding great joy in using things in ways they were never intended, which, much more than for a carpenter, puts a burden on us to take the propensity and possibility of abuse into account amongst ourselves.

So no. Mr. Altman, and the people who made this chair, are in part responsible. You aren't a carpenter. You had a responsibility to the public to constrain this thing, and to be as far ahead of it as humanly possible. The AI-as-therapist startups I've seen in the past couple of years, even just as passion projects from juniors I've trained, have all been met with the same guiding wisdom: go no further. You are critically out of your depth, and you are creating a clear and evident danger to the public that you are not yet mentally equipped to mitigate all the risks of.

If I can get there it's pretty damn obvious.

mothballed 6 days ago | parent | prev [-]

You're not even responsible if you build an AR-15, complete with a bunch of free advertisements from the US army using the select-fire variant to slaughter innocent Iraqis, and it's used to murder someone.

The government will require them to add age controls and that will be that.

wredcoll 6 days ago | parent | prev | next [-]

That's a great point. So often we attempt to place responsibility on machines that cannot have it.

joe_the_user 6 days ago | parent | prev | next [-]

The framing will immediately shift that way if this enters legal proceedings. The law always views things as you say - only people have agency.

hliyan 6 days ago | parent [-]

I predict the OpenAI legal team will argue that if a person should be held responsible, it should be the person who originally wrote the content about suicide that their LLM was trained on, and that the LLM is just a mechanism that passes the knowledge through. But if they make this argument, then some of their copyright arguments would be in jeopardy.

slipperydippery 6 days ago | parent | prev | next [-]

They have some responsibility because they're selling and framing these as more than the better-tuned variant of Markov chain generators that they in fucking fact are, while offering access to anybody who signs up, knowing that many users misunderstand what they're dealing with (in part because these companies' hype-meisters, like Altman, are bullshitting us).

idle_zealot 6 days ago | parent [-]

No, that's the level of responsibility they ought to have if they were releasing these models as products. As-is they've used a service model, and should be held to the same standards as if there were a human employee on the other end of the chat interface. Cut through the technical obfuscation. They are 100% responsible for the output of their service endpoints. This isn't a case of making a tool that can be used for good or ill, and it's not them providing some intermediary or messaging service like a forum with multiple human users and limited capacity for moderation. This is a direct consumer to business service. Treating it as anything else will open the floodgates to slapping an "AI" label on anything any organization doesn't want to be held accountable for.

slipperydippery 6 days ago | parent [-]

I like this framing even better.

This is similar to my take on things like Facebook apparently not being able to operate without psychologically destroying moderators. If that’s true… seems like they just shouldn’t operate, then.

If you’re putting up a service that you know will attempt to present itself as being capable of things it isn’t… seems like you should get in a shitload of trouble for that? Like maybe don’t do it at all? Maybe don’t unleash services you can’t constrain in ways that you definitely ought to?

blackqueeriroh 6 days ago | parent [-]

But understand that things like Facebook not operating doesn’t actually make the world any safer. In fact, it makes it less safe, because the same behavior is happening on the open internet and nobody is moderating it.

drw85 6 days ago | parent [-]

I don't think this is true anymore.

Facebook has gone so far down the 'algorithmic control' rabbit hole that it would most definitely be better if they weren't operating anymore.

They destroy people who don't question things, with their algorithm-driven bubble of misinformation.

guenthert 6 days ago | parent | prev | next [-]

Er, it's a piece of code, dude, regardless of how complex it might be. It isn't accountable for its actions. It's the people who commissioned, created, and tested it (or should have) who are responsible.

bell-cot 6 days ago | parent | prev | next [-]

Yeah...but rather than get into ever-fancier legal and philosophical arguments about the LLM's agency, I'd like to see the justice system just turn the tables:

"The court agrees with your argument that you are not responsible for the horrible things that happened to the victim, as a consequence of your LLM's decisions. But similarly, the court will not be responsible for the horrible things that will be happening to you, because our LLM's decisions."

(No - it doesn't much matter whether that is actually done. Vs. used as a rhetorical banhammer, to shut down the "we're not responsible" BS.)

notachatbot123 6 days ago | parent | prev | next [-]

I agree so very much. There is no reason for LLMs to be designed as human-like chat companions, creating a false sense that you're not dealing with technology.

blackqueeriroh 6 days ago | parent [-]

There are absolutely reasons for LLMs to be designed as human-like chat companions, starting with the fact that they’re trained on human speech and behavior, and what they do is statistically predict the most likely next token, which means they will statistically sound and act much like a human.
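
To make "statistically predict the most likely next token" concrete, here's a toy illustration with made-up token scores; it is not how any production model is actually served, just the basic sampling idea:

    # Toy next-token sampling: given scores the model assigns to candidate
    # continuations, pick one in proportion to how likely it is.
    import math, random

    def sample_next(logits: dict[str, float]) -> str:
        weights = {tok: math.exp(score) for tok, score in logits.items()}
        total = sum(weights.values())
        r, acc = random.uniform(0, total), 0.0
        for tok, w in weights.items():
            acc += w
            if acc >= r:
                return tok
        return tok  # floating-point edge case

    # "I feel so ..." -> continuations weighted the way human text tends to continue.
    print(sample_next({"alone": 2.1, "tired": 1.9, "happy": 1.3}))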

notachatbot123 4 days ago | parent [-]

That's not a requirement for LLMs. Training can be done differently.

ruraljuror 6 days ago | parent | prev [-]

I agree with your larger point, but I don't understand what you mean by the LLM not doing anything. LLMs do do things, and they can absolutely have agency (hence all the agents being released by AI companies).

I don’t think this agency absolves companies of any responsibility.

MattPalmer1086 6 days ago | parent [-]

An LLM does not have agency in the sense the OP means. It has nothing to do with agents.

It refers to the human ability to make independent decisions and take responsibility for their actions. An LLM has no agency in this sense.

ruraljuror 6 days ago | parent [-]

If you confine agency to something only humans can have, which is "human agency," then yes, of course LLMs don't have it. But there is a large body of philosophical work studying non-human agency, and it is from this characteristic of agency that LLM agents take their name. Harari argues that LLMs are the first technology that can act as an agent. I think saying that they "can't do things" and are not agents misunderstands them and underestimates their potential.

MattPalmer1086 6 days ago | parent | next [-]

LLMs can obviously do things, so we don't disagree there; I didn't argue they couldn't do things. They can definitely act as agents of their operator.

However, I still don't think LLMs have "agency", in the sense of being capable of making choices and taking responsibility for the consequences of them. The responsibility for any actions undertaken by them still reside outside of themselves; they are sophisticated tools with no agency of their own.

If you know of any good works on nonhuman agency I'd be interested to read some.

o11c 6 days ago | parent | prev [-]

That's completely missing the point of agency.

A slave lacks agency, despite being fully human and doing work. This is why almost every work of fiction involving slaves makes for terrible reading - because agency is the thing we as readers demand from a story.

Or, for games that are fully railroaded - the problem is that the players lack agency, even though they are fully human and taking action. Games do try to come up with ways to make it feel like there is more agency than there really is (because The Dev Team Thinks of Everything is hard work), but even then - the most annoying part of the game is when you hit that wall.

Theoretically an AI could have agency (this is independent of AI being useful). But since I have yet to see any interesting AI, I am extremely skeptical of it happening before nuclear fusion becomes profitable.

hypertele-Xii 6 days ago | parent [-]

Slaves do not lack agency. That's one of the reasons the Bible, specifically Exodus, is such a thrilling read.

throwawaysoxjje 6 days ago | parent [-]

A story where slaves escape through deus ex machina is probably not exactly a great example

hypertele-Xii 18 hours ago | parent [-]

Deus ex machina refers to a rhetorical device in fiction, which the Bible is not.