nerdsniper 9 hours ago

The distinction some people are making is between copy/pasting text and agentic action. Generally, mistakes in "work product" (output from ChatGPT that a human then files with a court, etc.) are not forgiven, because if you signed the document, you own its content. Versus some vendor-provided AI Agent which simply takes action on its own that a "reasonable person" would not have expected it to. Often we forgive those kinds of software bloopers.

Wobbles42 6 hours ago | parent | next [-]

"Agentic action" is just running a script. All that's different is now people are deploying scripts that they don't understand and can't predict the outcome of.

It's negligence, pure and simple. The only reason we're having this discussion is that a trillion dollars was spent writing said scripts.

iterance 2 hours ago | parent | prev | next [-]

If I hire an engineer and that engineer authorizes an "agent" to take an action, if that "agentic action" then causes an incident, guess whose door I'm knocking on?

Engineers are accountable for the actions they authorize. Simple as that. The agent can do nothing unless the engineer says it can. If the engineer doesn't feel they have control over what the agent can or cannot do, under no circumstances should it be authorized. To do so would be alarmingly negligent.

This extends to products. If I buy a product from a vendor and that product behaves in an unexpected and harmful manner, I expect that vendor to own it. I don't expect error-free work; nevertheless, "our AI behaved unexpectedly" does not deflect responsibility, nor is it satisfactory when presented as a root cause.

ori_b 9 hours ago | parent | prev | next [-]

If you put a brick on the accelerator of a car and hop out, you don't get to say "I wasn't even in the car when it hit the pedestrian".

Shalomboy 9 hours ago | parent [-]

This is true for bricks, but it is not true if your dog starts up your car and hits a pedestrian. Collisions caused by non-human drivers are a fascinating edge case for the times we're in.

jacquesm 7 hours ago | parent | next [-]

It is very much true for dogs in that case: (1) it is your dog, (2) it is your car, (3) it is your responsibility to make sure your car cannot be started by your dog, and (4) the pedestrian has a reasonable expectation that a vehicle parked without a person in it has been made safe to the point that it will not suddenly start to move without an operator, and dogs don't qualify as operators.

You'd lose that lawsuit in a heartbeat.

direwolf20 7 hours ago | parent [-]

What if your car was parked in a normal way, such that a reasonable person would not expect a dog to be able to start it, but the dog did several things no reasonable person would expect and started it anyway?

jacquesm 7 hours ago | parent | next [-]

You can 'what if' this until the cows come home, but you are responsible, period.

I don't know what kind of driver's education you get where you live, but where I live and have lived, one of the basics is knowing how to park and lock your vehicle safely. That includes removing the ignition key (assuming your car has one) and setting the parking brake. You aim the wheels at the kerb (if there is one) when you're on an incline. And if you drive a stick shift you set the gear to neutral (in some countries they will teach you to set it to 1st or reverse, for various reasons).

We also have roadworthiness assessments that ensure all these systems work as advertised. You could let a pack of dogs loose in my car under any circumstances and they would not be able to move it, though I'd hate to clean up the interior afterwards.

direwolf20 6 hours ago | parent [-]

I agree. The dog smashed the window, hot-wired the ignition, released the parking brake, shifted into drive, and turned the wheel towards the opposite side of the road where a mother was pushing a stroller, killing the baby. I know, crazy, right? But I swear I'm not lying, the neighbor caught it on camera.

Who's liable?

I think this would be a freak accident. Nobody would be liable.

bigstrat2003 29 minutes ago | parent | next [-]

Your analogy has long since ceased to have any illuminating power, because it involves things that are straight up impossible.

rdtsc 2 hours ago | parent | prev | next [-]

Well, at that point we might as well say it's gremlins that you summoned, so who knows, there are no laws about gremlins hot-wiring cars. If you summoned them, are they _your_ gremlins, or do they have their own agency? How guilty are you, really... At some point it becomes a bit silly to go into what-if scenarios; it helps to look at actual cases.

jacquesm 6 hours ago | parent | prev | next [-]

> I agree. The dog smashed the window, hot-wired the ignition, released the parking brake, shifted into drive, and turned the wheel towards the opposite side of the road where a mother was pushing a stroller, killing the baby. I know, crazy, right? But I swear I'm not lying, the neighbor caught it on camera.

> Who's liable?

You are. It's still your dog. If you replaced the dog with a child, the case would be identical (but more plausible). This is really not as interesting as you think it is. The fact that you have a sentient dog is going to be laughed out of court, and your neighbor will be in the dock together with you for attempting to mislead the court with your AI-generated footage. See, two can play at that.

When you make such ridiculously contrived examples turnaround is fair play.

gamblor956 5 hours ago | parent | prev [-]

You would not be guilty of a crime, because that requires intent.

But you would be liable for civil damages, because that does not. There are multiple theories under which to establish liability, but most likely this would be treated as negligence.

thatjoeoverthr 4 hours ago | parent | prev [-]

You're stretching it. It's more like training your dog to start the car and accelerate, then opening the door and turning your back.

Everything an AI does is downstream of deliberate, albeit imperfect, training.

You know this, you rig it all up and you let things happen.

Terr_ 2 hours ago | parent | prev | next [-]

Being guilty != Being responsible

They correlate, but we must be careful not to mistake one for the other. The latter is a lower bar.

b00ty4breakfast 5 hours ago | parent | prev | next [-]

I'm dubious, do you have any examples of this happening?

victorbjorklund 9 hours ago | parent | prev | next [-]

I don’t know where you're from, but at least in Sweden you have strict liability for anything your dog does.

ori_b 8 hours ago | parent | prev | next [-]

In the USA, at least, it seems pet owners are liable for any harm their pets do.

cess11 8 hours ago | parent | prev | next [-]

Legally, in a lot of jurisdictions, a dog is just your property. What it does, you did, usually with presumed intent or strict liability.

gowld 8 hours ago | parent [-]

What if you planted a bush that attracted a bat that bit a child?

Muromec 7 hours ago | parent | next [-]

What if you have an email in your inbox warning you that (1) this specific bush attracts bats, (2) there were in fact bats seen near your bush, and (3) bats were observed almost biting a child before? And you also have "how do I fuck up them kids by planting a bush that attracts bats" in your browser history. It's a spectrum, you know.

dragonwriter 7 hours ago | parent | prev | next [-]

Well, if it was a bush known to also attract children, it was on your property, and the child was in fact attracted by it and also on your property, and the presence of the bush created the danger of bat bites, the principle of “attractive nuisance” is in play.

b00ty4breakfast 5 hours ago | parent | prev [-]

What if my auntie had wheels, would she be a wagon?

freejazz 9 hours ago | parent | prev [-]

Prima facie negligence = liability

observationist 9 hours ago | parent | prev | next [-]

To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime. "It's my robot, it wasn't me" isn't a compelling defense. If you can prove that it behaved significantly outside of your informed or contracted expectations, then maybe the AI platform or the robot developer could be at fault. Given the current state of AI, though, I think it's not unreasonable to expect that any bot can go rogue and that huge, trivially accessible jailbreak risks exist, so there's no excuse for deploying an agent onto the public internet to do whatever it wants outside direct human supervision.

If you're running moltbot or whatever, you're responsible for what happens, even if the AI decided the best way to get money was to hack the Federal Reserve and assign a trillion dollars to an account in your name. Or if Grok goes mechahitler and orders a singing telegram to Will Stancil's house, or something. These are tools: complex, complicated, unpredictable tools that need skillful and careful use.

There was a notorious dark web bot case where someone created a bot that autonomously went onto the dark web and purchased numerous illicit items.

https://wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww.bitnik.or...

They bought some ecstasy, a Hungarian passport, and random other items from Agora.

>The day after they took down the exhibition showcasing the items their bot had bought, the Swiss police “arrested” the robot, seized the computer, and confiscated the items it had purchased. “It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited, by destroying them,” someone from !Mediengruppe Bitnik wrote on their blog.

In April, however, the bot was released along with everything it had purchased, except the ecstasy, and the artists were cleared of any wrongdoing. But the arrest had many wondering just where the line gets drawn between human and computer culpability.

dragonwriter 7 hours ago | parent | next [-]

> To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime.

For most crimes, this is circular, because whether a crime occurred depends on whether a person did the requisite act of the crime with the requisite mental state. A crime is not an objective thing independent of an actor that you can determine happened as a result of a tool and then conclude guilt for based on tool use.

And for many crimes, recklessness or negligence as mental states are not sufficient for the crime to have occurred.

rmunn 5 hours ago | parent [-]

For negligence that results in the death of a human being, many legal systems make a distinction between negligent homicide and criminally negligent homicide. Where the line is drawn depends on a judgment call, but in general you're found criminally negligent if your actions are completely unreasonable.

A good example might be this. In one case, a driver's brakes fail and he hits and kills a pedestrian crossing the street. It is found that he had not done proper maintenance on his brakes and that the failure was preventable. He's found liable in a civil case, because his negligence led to someone's death, but he's not found guilty of a crime, so he won't go to prison. A different driver is speeding, driving at highway speeds through a residential neighborhood. He turns a corner and can't stop in time to avoid hitting a pedestrian. He is found criminally negligent and goes to prison, because his actions were reckless and beyond what any reasonable person would do.

The first case was ordinary negligence: still bad because it killed someone, but not so obviously stupid that the person should be in prison for it. The second case is criminal negligence, or in some legal systems it might be called "reckless disregard for human life". He didn't intend to kill anyone, but his actions were so blatantly stupid that he should go to prison for causing the pedestrian's death.

b00ty4breakfast 9 hours ago | parent | prev | next [-]

That darknet bot one always confuses me. The artists/programmers/whatever specifically instructed the computer, through the bot, to perform actions that would likely result in breaking the law. It's not a side effect of some other, legal action they were trying to accomplish; its entire purpose was to purchase things on a marketplace known for hosting illegal goods and services.

If I build an autonomous robot that swings a hunk of steel on the end of a chain and then program it to travel to where people are likely to congregate and someone gets hit in the face, I would rightfully be held liable for that.

cess11 8 hours ago | parent | prev [-]

"computer culpability"

That idea is really weird. Culpa (and dolus) in occidental law is a thing of the mind, what you understood or should have understood.

A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.

Muromec 7 hours ago | parent | next [-]

>A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.

We as a society, for our own convenience, can choose to believe that an LLM does have a mind and can understand the results of its actions. The second part doesn't really follow, though. Can you even hurt an LLM in a way that is equivalent to murdering a person? Evicting it off my computer isn't necessarily a crime.

It would be good news if the answer was yes, because then we just need to find a converter of camel amounts to dollar amounts and we are all good.

Can an LLM perceive time in a way that allows imposing an equivalent of jail time? Is the LLM I'm running on my computer the same personality as the one running on yours, and should I also shut down mine when yours acts up? Do we even need the punishment aspect of it, and not just rehabilitation, repentance, and retraining?

Wobbles42 6 hours ago | parent [-]

The only hallucination here is the idea that a giant equation is a mind.

Muromec 6 hours ago | parent [-]

It's only a hallucination if you are the only one seeing it. Otherwise the line between that, a social construct, and a religious belief is a bit blurry.

observationist 7 hours ago | parent | prev [-]

Yeah - I'm pretty sure, technically, that current AI isn't conscious in any meaningful way, and even the agentic scaffolding and systems put together lack any persistent, meaningful notion of "mind", especially in a legal sense. There are some newer architectures and experiments with subjective modeling and "wiring" that I'd consider solid evidence of structural consciousness, but for now, AI is a tool. It also looks like we can make tools arbitrarily intelligent and competent, and we can extend their capabilities to superhuman time scales, so I think the law needs an explicit precedent for "this person is the user of the tool which did the bad thing". The use could be negligent, reckless, deliberate, or malicious, but I don't think there's any credibility to the idea that "the AI did it!"

At worst, you might shift liability to the platform in the case of some sort of blatant misrepresentation of capabilities or features, but absolutely none of the products or models currently available withstands any rational scrutiny into whether it is conscious. At most they can undergo a "flash" of subjective experience, decoupled from any coherent sequence or persistent phenomenon.

We need research and legitimate, scientific, rational definitions for agency, consciousness, and subjective experience, because there will come a point where such software becomes available, and it will present not only novel legal questions but incredible moral and ethical questions as well. Accidentally oopsing a torment nexus into existence, with residents possessed of superhuman capabilities, sounds like a great way to spark off the first global interspecies war. Well, at least since the Great Emu War. If we lost to the emus, we'll stand no chance against our digital offspring.

A good lawyer will probably get away with "the AI did it, it wasn't me!" before we get good AI law, though. It's too new and mysterious and opaque to normal people.

kazinator 8 hours ago | parent | prev | next [-]

That's the same thing. You signed off on the agent doing things on your behalf; you are responsible.

If you gave a loaded gun to a five-year-old, would "the five-year-old did it" be a valid excuse?

Wobbles42 6 hours ago | parent [-]

If the five-year-old was a product resulting from trillions of dollars in investments, and the marketability of that product required people to be able to hand guns to that five-year-old without liability, then we would at least be having that discussion.

Purely organically of course.

Terr_ 2 hours ago | parent [-]

> If the five-year-old was a product resulting from trillions of dollars in investments

In a weird way, that's actually true. It's a highly (soon to be fully) autonomous giga-swarm of the most complicated nanobots in existence, the result of investments over hundreds of thousands of years.

That said, we don't really get to choose which ones we own, although we do have input on their maintenance. :p

niyikiza 8 hours ago | parent | prev | next [-]

> if you signed the document, you own its content. Versus some vendor-provided AI Agent which simply takes action on its own

Yeah, that's exactly the model I think we should adopt for AI agent tool calls as well: cryptographically signed, task-scoped "warrants" that remain traceable even across multi-agent delegation chains.
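
As a sketch of the idea (the schema and field names here are hypothetical, not any real spec, and an HMAC over a shared demo key stands in for the public-key signatures a real system would use):

    import hmac, hashlib, json, time

    SIGNING_KEY = b"demo-key"  # stand-in; a real system would use an asymmetric keypair

    def issue_warrant(principal: str, task: str, scope: list[str], ttl_s: int) -> dict:
        """Create a task-scoped warrant and sign its canonical JSON form."""
        body = {
            "principal": principal,               # who authorized this
            "task": task,                         # what it was authorized for
            "scope": sorted(scope),               # actions the holder may take
            "expires": int(time.time()) + ttl_s,  # hard expiry
        }
        payload = json.dumps(body, sort_keys=True).encode()
        return {**body, "sig": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()}

    def verify_warrant(w: dict) -> bool:
        """Recompute the signature over the body and check the expiry."""
        body = {k: v for k, v in w.items() if k != "sig"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, w["sig"]) and time.time() < w["expires"]

The point being that the human's authorization becomes a verifiable artifact you can hold up later, rather than an implicit side effect of handing over an API key.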

embedding-shape 8 hours ago | parent | next [-]

Kind of like https://github.com/cursor/agent-trace but cryptographically signed?

> Agent Trace is an open specification for tracking AI-generated code. It provides a vendor-neutral format for recording AI contributions alongside human authorship in version-controlled codebases.

niyikiza 8 hours ago | parent [-]

Similar space, different scope/approach. Tenuo warrants track who authorized what across delegation chains (human to agent, agent to sub-agent, sub-agent to tool) with cryptographic proof and proof-of-possession (PoP) at each hop. Trace tracks provenance; warrants track authorization flow. Both are open specs. I could see them complementing each other.

Muromec 7 hours ago | parent | prev [-]

Why does it need cryptography, even? If you gave the agent a token to interact with your bank account, then you gave it permission. If you want to limit the amount it is allowed to send and restrict the list of recipients, put a filter between the account and the agent that enforces that. If you want money to be sent only against an invoice, let the filter check that an invoice reference is provided by the agent. If you did neither of those things and the platform that runs the agents didn't accept liability, it's on you. Setting up the filters and engineering the prompts is on you too.

Now if you did all of that, but made a bug in implementing the filter, then you at least tried and weren't negligent, but it's still on you.
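
Such a deterministic filter is easy to sketch; a minimal version, where the `Payment` shape, the allowlist, the invoice set, and the cap are all made-up illustrations:

    from dataclasses import dataclass

    @dataclass
    class Payment:
        recipient: str
        amount_cents: int
        invoice_ref: str | None

    ALLOWED_RECIPIENTS = {"acme-supplies", "payroll-provider"}  # illustrative
    KNOWN_INVOICES = {"INV-2024-001", "INV-2024-002"}           # illustrative
    MAX_AMOUNT_CENTS = 50_000

    def check_payment(p: Payment) -> None:
        """Deterministic gate between the agent and the bank API.
        Raises instead of forwarding if any rule is violated."""
        if p.recipient not in ALLOWED_RECIPIENTS:
            raise PermissionError(f"recipient {p.recipient!r} not on allowlist")
        if p.amount_cents > MAX_AMOUNT_CENTS:
            raise PermissionError(f"amount {p.amount_cents} exceeds cap")
        if p.invoice_ref not in KNOWN_INVOICES:
            raise PermissionError("payment must reference a known invoice")
        # only after all checks pass would the request be forwarded to the bank

The agent never talks to the bank directly; it talks to the filter, and the filter's rules don't care what the model hallucinated.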

niyikiza 6 hours ago | parent | next [-]

Tokens + filters work for single-agent, single-hop calls. Gets murky when orchestrators spawn sub-agents that spawn tools. Any one of them can hallucinate or get prompt-injected. We're building around signed authorization artifacts instead. Each delegation is scoped and signed, chains are verifiable end-to-end. Deterministic layer to constrain the non-deterministic nature of LLMs.
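
A hedged sketch of what end-to-end chain verification could mean, continuing the hypothetical warrant format above (per-hop HMAC keys again stand in for real public/private keypairs):

    import hmac, hashlib, json, time

    # In a real system each hop holds a private key and the verifier
    # checks against the corresponding public key.
    KEYS = {"human": b"k0", "orchestrator": b"k1", "sub-agent": b"k2"}

    def sign(issuer: str, body: dict) -> str:
        payload = json.dumps(body, sort_keys=True).encode()
        return hmac.new(KEYS[issuer], payload, hashlib.sha256).hexdigest()

    def delegate(issuer: str, holder: str, scope: set[str], expires: float) -> dict:
        body = {"issuer": issuer, "holder": holder,
                "scope": sorted(scope), "expires": expires}
        return {**body, "sig": sign(issuer, body)}

    def verify_chain(chain: list[dict]) -> bool:
        """Walk root -> leaf; every link must be validly signed, unexpired,
        issued by the previous holder, and no broader than its parent."""
        prev = None
        for w in chain:
            body = {k: v for k, v in w.items() if k != "sig"}
            if not hmac.compare_digest(sign(w["issuer"], body), w["sig"]):
                return False                              # bad signature
            if time.time() > w["expires"]:
                return False                              # expired
            if prev is not None:
                if w["issuer"] != prev["holder"]:
                    return False                          # broken delegation link
                if not set(w["scope"]) <= set(prev["scope"]):
                    return False                          # scope exceeds parent
                if w["expires"] > prev["expires"]:
                    return False                          # outlives parent
            prev = w
        return True

    root = delegate("human", "orchestrator", {"read", "pay"}, time.time() + 3600)
    leaf = delegate("orchestrator", "sub-agent", {"read"}, time.time() + 600)
    assert verify_chain([root, leaf])

However an individual hop misbehaves, it can only ever narrow what it was handed, and the whole chain stays checkable after the fact.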

Muromec 6 hours ago | parent [-]

>We're building around signed authorization artifacts instead. Each delegation is scoped and signed, chains are verifiable end-to-end. Deterministic layer to constrain the non-deterministic nature of LLMs.

Ah, I get it. So the token can be downscoped as it is passed along, like the pledge thing, so a sub-agent doesn't exceed the scope of its parent. I have a feeling that it's like cryptography in general -- you take one problem and reduce it to a key-management problem.

In a more practical sense, if the non-deterministic layer decides what the reduced scope should be, all delegations can become "Allow: *" in the most pathological case, right? Or like the Play Store, where a shady calculator app can have permission to read your messages. Somebody has to review those and flag excessive grants.

niyikiza 5 hours ago | parent [-]

Right, the non-deterministic layer can't be the one deciding scope. That's the human's job at the root.

The LLM can request a narrower scope, but attenuation is monotonic and enforced cryptographically. You can't sign a delegation that exceeds what you were granted. TTL too: the warrant can't outlive its parent.

So yes, key management. But the pathological "Allow: *" has to originate from a human who signed it. That's the receipt you're left holding.

You're poking at the right edges though. UX for scope definition and revocation propagation are what we're working through now. We're building this at tenuo.dev if you want to dig into the spec or poke holes.

Wobbles42 6 hours ago | parent | prev [-]

How can you give an agent a token without cryptography being involved?

Muromec 6 hours ago | parent [-]

Not every access token is a (public) key or a signed object. It may be, but it doesn't have to be. It's not state of the art, but it's also not unheard of, to use a pre-shared secret with no cryptography involved and rely on presenting the secret itself with each request. Cookie sessions are often like that.
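
For example, a classic server-side session store: the token is just an opaque random string the server remembers, with no signing anywhere (a minimal sketch):

    import secrets

    SESSIONS: dict[str, str] = {}  # token -> user, held server-side

    def issue_token(user: str) -> str:
        """Hand the client an opaque random string; no keys, no signatures."""
        token = secrets.token_urlsafe(32)
        SESSIONS[token] = user
        return token

    def authenticate(presented: str) -> str | None:
        """The token is valid only because the server remembers issuing it.
        A signed token, by contrast, would be verifiable without this table."""
        return SESSIONS.get(presented)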

jacquesm 7 hours ago | parent | prev | next [-]

If you signed the document you are responsible for its content; you are most likely not the owner of it.

IG_Semmelweiss 4 hours ago | parent | prev [-]

Actually, things are heading in a good direction re: AI bloopers.

Courts of law have already found that AI interactions with customers are binding, even if said interactions are considered "bloopers" by the vendor. [1]

[1] https://www.forbes.com/sites/marisagarcia/2024/02/19/what-ai...