| ▲ | observationist 11 hours ago |
To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime. "It's my robot, it wasn't me" isn't a compelling defense. If you can prove that the tool behaved significantly outside of your informed or contracted expectations, then maybe the AI platform or the robot developer could be at fault. Given the current state of AI, though, I think it's not unreasonable to expect that any bot can go rogue and that huge, trivially accessible jailbreak risks exist, so there's no excuse for deploying an agent onto the public internet to do whatever it wants outside direct human supervision.

If you're running moltbot or whatever, you're responsible for what happens, even if the AI decided the best way to get money was to hack the Federal Reserve and assign a trillion dollars to an account in your name. Or if Grok goes mechahitler and orders a singing telegram to Will Stancil's house, or something. These are tools: complex, complicated, unpredictable tools that need skillful and careful use.

There was a notorious dark web bot case where someone created a bot that autonomously went onto the dark web and purchased numerous illicit items: https://www.bitnik.or... It bought some ecstasy, a Hungarian passport, and random other items from Agora.

> The day after they took down the exhibition showcasing the items their bot had bought, the Swiss police "arrested" the robot, seized the computer, and confiscated the items it had purchased. "It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited, by destroying them," someone from !Mediengruppe Bitnik wrote on their blog. In April, however, the bot was released along with everything it had purchased, except the ecstasy, and the artists were cleared of any wrongdoing. But the arrest had many wondering just where the line gets drawn between human and computer culpability.
|
| ▲ | thayne an hour ago | parent | next [-] |
That's how it should work. I'm not sure it will actually work that way in real courts, at least not consistently.
|
| ▲ | dragonwriter 10 hours ago | parent | prev | next [-] |
> To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime.

For most crimes, this is circular, because whether a crime occurred depends on whether a person did the requisite act of the crime with the requisite mental state. A crime is not an objective thing, independent of an actor, that you can determine happened as a result of a tool and then conclude guilt for based on tool use. And for many crimes, recklessness or negligence as mental states are not sufficient for the crime to have occurred.
| |
| ▲ | rmunn 7 hours ago | parent [-]
For negligence that results in the death of a human being, many legal systems make a distinction between ordinary negligence and criminal negligence. Where the line is drawn is a judgment call, but in general you're found criminally negligent if your actions were completely unreasonable.

A good example is the following pair of cases. In the first, a driver's brakes fail and he hits and kills a pedestrian crossing the street. It is found that he had not done proper maintenance on his brakes and that the failure was preventable. He's found liable in a civil case, because his negligence led to someone's death, but he's not found guilty of a crime, so he won't go to prison.

A second driver was speeding, driving at highway speeds through a residential neighborhood. He turns a corner and can't stop in time to avoid hitting a pedestrian. He is found criminally negligent and goes to prison, because his actions were reckless and beyond what any reasonable person would do.

The first case was ordinary negligence: still bad because it killed someone, but not so obviously stupid that the person should be in prison for it. The second case is criminal negligence, or in some legal systems "reckless disregard for human life". He didn't intend to kill anyone, but his actions were so blatantly stupid that he should go to prison for causing the pedestrian's death.
|
|
| ▲ | b00ty4breakfast 11 hours ago | parent | prev | next [-] |
That darknet bot one always confuses me. The artists/programmers/whatever specifically instructed the computer, through the bot, to perform actions that would likely result in breaking the law. It's not a side effect of some other, legal action they were trying to accomplish; its entire purpose was to purchase things on a marketplace known for hosting illegal goods and services. If I build an autonomous robot that swings a hunk of steel on the end of a chain, program it to travel to where people are likely to congregate, and someone gets hit in the face, I would rightfully be held liable for that.
|
| ▲ | cess11 10 hours ago | parent | prev [-]
> "computer culpability"

That idea is really weird. Culpa (and dolus) in occidental law is a thing of the mind: what you understood or should have understood. A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.
| |
| ▲ | Muromec 9 hours ago | parent | next [-]
> A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.

We as a society, for our own convenience, can choose to believe that an LLM does have a mind and can understand the results of its actions. The second part doesn't really follow, though. Can you even hurt an LLM in a way that is equivalent to murdering a person? Evicting it off my computer isn't necessarily a crime. It would be good news if the answer were yes, because then we'd just need to find a converter of camel amounts to dollar amounts and we'd be all good.

Can an LLM perceive time in a way that allows imposing an equivalent of jail time? Is the LLM I'm running on my computer the same personality as the one running on yours, and should I also shut down mine when yours acted up? Do we even need the punishment aspect of it at all, and not just rehabilitation, repentance, and retraining?
| ▲ | Wobbles42 9 hours ago | parent | next [-]
The only hallucination here is the idea that a giant equation is a mind.
| ▲ | Muromec 8 hours ago | parent [-]
It's only a hallucination if you are the only one seeing it. Otherwise the line between that, a social construct, and a religious belief is a bit blurry.
| |
| ▲ | cess11 an hour ago | parent | prev [-]
This type of religious gobbledygook is not a sound foundation for regulating state violence.
| |
| ▲ | observationist 9 hours ago | parent | prev [-]
Yeah - I'm pretty sure, technically, that current AI isn't conscious in any meaningful way, and even the agentic scaffolding and systems built on top of it lack any persistent, meaningful notion of "mind", especially in a legal sense. There are some newer architectures and experiments with subjective modeling and "wiring" that I'd consider solid evidence of structural consciousness, but for now, AI is a tool.

It also looks like we can make tools arbitrarily intelligent and competent, and extend their capabilities to superhuman time scales, so I think the law needs to establish an explicit precedent of "this person is the user of the tool that did the bad thing" - the use could be negligent, reckless, deliberate, or malicious, but I don't think there's any credibility to the idea that "the AI did it!" At worst, you would shift liability to the platform, in the case of some blatant misrepresentation of capabilities or features, but absolutely none of the products or models currently available withstand any rational scrutiny into whether they are conscious. At most they can undergo a "flash" of subjective experience, decoupled from any coherent sequence or persistent phenomenon.

We need research and legitimate, scientific, rational definitions for agency, consciousness, and subjective experience, because there will come a point where such software becomes available, and it will present not only novel legal questions but incredible moral and ethical questions as well. Accidentally oopsing a torment nexus into existence, with residents possessed of superhuman capabilities, sounds like a great way to spark off the first global interspecies war. Well, at least since the Great Emu War. If we lost to the emus, we'd have no chance against our digital offspring.

A good lawyer will probably get away with "the AI did it, it wasn't me!" before we get good AI law, though. It's too new and mysterious and opaque to normal people.
| ▲ | cess11 an hour ago | parent [-]
It's just a database. It is not intelligent. Its ability for consciousness is the same as Gandalf's.
|
|