cess11 8 hours ago

"computer culpability"

That idea is really weird. Culpa (and dolus) in occidental law is a thing of the mind: what you understood or should have understood.

A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.

Muromec 7 hours ago | parent

>A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.

We as a society, for our own convenience, can choose to believe that an LLM does have a mind and can understand the results of its actions. The second part doesn't really follow, though. Can you even hurt an LLM in a way that is equivalent to murdering a person? Evicting it from my computer isn't necessarily a crime.

It would be good news if the answer were yes, because then we just need to find a converter of camel amounts to dollar amounts and we are all good.

Can an LLM perceive time in a way that allows imposing an equivalent of jail time? Is the LLM I'm running on my computer the same personality as the one running on yours, and should I also shut down mine when yours acts up? Do we even need the punishment aspect at all, rather than just rehabilitation, repentance, and retraining?

Wobbles42 6 hours ago | parent

The only hallucination here is the idea that a giant equation is a mind.

Muromec 6 hours ago | parent

It's only a hallucination if you are the only one seeing it. Otherwise, the line between that, a social construct, and a religious belief is a bit blurry.

observationist 7 hours ago | parent

Yeah - I'm pretty sure, technically, that current AI isn't conscious in any meaningful way, and even the agentic scaffolding and systems put together lack any persistent, meaningful notion of "mind", especially in a legal sense. There are some newer architectures and experiments with subjective modeling and "wiring" that I'd consider solid evidence of structural consciousness, but for now, AI is a tool. It also looks like we can make tools arbitrarily intelligent and competent, and extend those capabilities to superhuman time scales, so I think the law needs to establish an explicit precedent of "this person is the user of the tool that did the bad thing" - the use could be negligent, reckless, deliberate, or malicious, but I don't think there's any credibility to the idea that "the AI did it!"

At worst, you would confer liability on the platform in the case of some blatant misrepresentation of capabilities or features, but absolutely none of the products or models currently available withstand any rational scrutiny as to whether they are conscious. At most, they can undergo a "flash" of subjective experience, decoupled from any coherent sequence or persistent phenomenon.

We need research and legitimate, scientific, rational definitions of agency, consciousness, and subjective experience, because there will come a point where such software exists, and it will present not only novel legal questions but incredible moral and ethical ones as well. Accidentally oopsing a torment nexus into existence, with residents possessed of superhuman capabilities, sounds like a great way to spark off the first global interspecies war - well, the first since the Great Emu War. If we lost to the emus, we have no chance against our digital offspring.

A good lawyer will probably get away with "the AI did it, it wasn't me!" before we get good AI law, though. It's too new and mysterious and opaque to normal people.