Ask HN: Should AI agents have their own legal entities?
2 points by LRG-H 10 hours ago | 7 comments
When an agent spends money or creates liability, who's responsible? Personal accounts are risky, and manual LLCs don't really scale.
dlcarrier 10 hours ago | parent | next [-]
When you hire a tax accountant or a lawyer, you are liable for everything they do in your name. If hiring highly educated, highly paid humans doesn't shield you from liability, even for their mistakes, there's no way a computer program could.
dmilicev2 9 hours ago | parent | prev | next [-]
AI is not sentient, and therefore not liable for its actions; holding it liable would be pointless. We ought to look one step beyond the lines of code for liability, unless the AI eventually clears the bar of sentience.
speakingmoistly 4 hours ago | parent | prev | next [-]
> When an agent spends money or creates liability, who's responsible?

Whoever is operating it (as in, you, or the entity providing you the service if you're using someone else's thing). If I operate a coffee machine accessible to others and it injures someone, I'm on the hook; the same logic applies to an agent. At the end of the day, LLMs are tools, and whoever is overseeing one is vouching for it (and paying the price if it misbehaves).

> Personal accounts are risky and manual LLCs don't really scale?

With personal accounts, I assume you mean use cases like "I use an agent in my personal capacity to do things and it made a mistake." That sounds like you're shouldering the risk and accepting the potential consequences. It's just a case of "I did something risky, and found out."

As for the manual-LLC bit, I'm assuming you're thinking of an agent as part of a business. In that case, whoever is operating the business is on the hook. The idea that agents should be legal entities just sounds like an attempt to shift blame for knowingly doing risky things.
yawpitch 9 hours ago | parent | prev [-]
Why would it be a good idea, or indeed an idea at all, to shield the promptor from the prompt's consequences? If you think about it to any great degree, what you're asking is: should Two-Face's coin be held responsible for which face it lands on?