> That's due to authorized humans at the company setting up the LLMs to publish statements which are materially relied upon. Not because company officers have delegated legal authority to the LLM process to form binding contracts.

It's not that straightforward. A contract, at heart, is an agreement between two parties, each of whom must (among other things) reasonably and materially rely on the other being either a principal or someone acting under a principal's authority. I am sure that Air Canada did not intend to give its autonomous customer service agent the authority to make the false promises that it made. But it did so anyway by failing to constrain the agent's behavior.

> It's basically the same with longstanding customer service "agents". They are authorized to do only what they are authorized to semantically express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system it's not like you can take the company to court to enforce that.

I don't think that's necessarily correct. I believe the law (again, not legal advice) would bind the seller to the agent's price mistake unless (1) the customer knew it was a mistake and tried to take advantage of it anyway, or (2) the price was so outlandish that no reasonable person would believe it. That said, there's often a wide gap between what the law requires and what actually happens: nobody's going to sue over a $10 price mistake.
mindslight 4 hours ago:

Yes, but neither airline agents nor LLM agents hold themselves out as having legal authority to bind their principals in general contracts. To the extent you could get an LLM to state such a thing, it would be specious and still not binding. Someone calling the airline support line and assuming the airline agent is authorized to form general contracts doesn't change the legal situation where they are not, right?

Fundamentally, running `sdkmanager --licenses` does not consummate a contract [0]. Rather, running this command is an indication that the user has been made aware that there is a non-negotiated contract they will be entering into by using the software; it's the continued use of the software which indicates acceptance of the terms. If an LLM does this unbeknownst to a user, that just means there is one less indication that the user is aware of the license.

Of course, this butts up against the limits to litigation you pointed out, which is why contracts of adhesion mostly revolve around making users disclaim legal rights and upholding copyright (which can be enforced out of band at the scale where it starts to matter).

[0] If it did, then anyone could trivially work around it by skipping the check with a debugger, independently creating whatever file/contents this command creates, or using software that someone else already installed.

(I edited the sentence you quoted slightly, to make it more explicit. I don't think it changes anything, but if it does then I am sorry.)
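For context on the footnote above: the "file/contents" that `sdkmanager --licenses` creates is just a directory of hash files under `$ANDROID_HOME/licenses/`, one per license, each containing a hash of the accepted license text. A minimal sketch of that mechanism, assuming a POSIX shell; the hash value shown here is illustrative only, not a real accepted-license hash:

```shell
# Sketch of what `sdkmanager --licenses` leaves behind after "acceptance".
# We recreate the layout by hand in a throwaway directory, which is exactly
# the trivial workaround the footnote describes.
ANDROID_HOME="$(mktemp -d)"
mkdir -p "$ANDROID_HOME/licenses"

# Acceptance is recorded as one hash of the license text per line, in a
# file named after the license (hash below is a placeholder, not real):
printf '24333f8a63b6825ea9c5514f83c2829b004d1fee\n' \
  > "$ANDROID_HOME/licenses/android-sdk-license"

# Build tools later check only that this file exists with a matching hash:
ls "$ANDROID_HOME/licenses"
```

The point being made in the thread is that nothing about writing this file involves a counterparty: the marker file is evidence of notice, not the act of contract formation itself.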
otterley 3 hours ago:

> neither airline agents nor LLM agents hold themselves out as having legal authority to bind their principals in general contracts.

You don't have to explicitly hold yourself out as an agent to be treated as one. Circumstances matter. There's an "apparent authority" doctrine of agency law I'd encourage you to study.

> Rather running this command is an indication that the user has been made aware that there is a non-negotiated contract they will be entering into by using the software - it's the continued use of the software which indicates acceptance of the terms.

Yup, that's a contract of adhesion, and so-called "click-wrap" agreements can be valid contracts. See e.g. https://www.goodwinlaw.com/en/insights/publications/2022/08/...

> if it did then anyone could trivially work around this by skipping the check with a debugger, independently creating whatever file/contents this command creates, or using software that someone else already installed.

Courts tend not to take kindly to "hacking attempts" like this, and you could find yourself liable for copyright infringement, trespass to chattels, or possibly even criminal charges under the CFAA if you do. Let me put it this way: U.S. and English law are stacked squarely in favor of the protection of property rights.
mindslight 2 hours ago:

> Courts tend not to take kindly to "hacking attempts" like this

Yes, because law is generally defined in terms of intent, knowledge, and other human-level qualities. The attempt to "hack around" the specific prompt is irrelevant because the specific prompt is irrelevant, just as the specific weight of paper a contract is printed on is irrelevant. Any contract could define such details as relevant, but it's generally not beneficial to do so.

> There's an "apparent authority" doctrine of agency law I'd encourage you to study

Sure, but this still relies upon an LLM agent being held out as some kind of bona fide legal agent capable of executing legally binding agreements. In this case there isn't even a counterparty capable of judging whether the command is being run by someone with the apparent intent and authority to legally bind.

So you're essentially saying there is no way for a user to run a software program without extending it the authority to form legal contracts on their behalf. I'd call this a preposterous attempt to "hack around" the utter lack of intent on the part of the person running the program.
otterley an hour ago:

> the specific prompt is irrelevant

The instruction prompt is absolutely relevant: it conveys to the agent the scope of its authority and the principal's intent, and it would undoubtedly be used as evidence if a dispute arose over it. It's no different in kind from instructions you would give a human being.

> this still relies upon an LLM agent being held out as some kind of bona fide legal agent capable of executing some legally binding agreements

Which it can...

> You're essentially saying there is no way to run a software program without extending it the legal authority to form legal contracts on your behalf.

I'm not saying that at all. Agency law is very mature at this stage, and the test for determining whether an actor is an agent and whether it acted within the scope of its authority is pretty clear. I'm not going to lay it all out here, so please go study it independently.

I'm also not entirely sure what your angle here is: are you trying to say that an LLM-based agent cannot, under any circumstances, be treated as acting on its principal's behalf? Or are you just being argumentative and trying to find some angle to be "right"?
mindslight an hour ago:

> The instruction prompt is absolutely relevant

By "prompt" I was referring to the prompting of the user by a program such as `sdkmanager --licenses`. If a user explicitly prompted an LLM agent to "accept all licenses", then I'd agree with you.

> Which it can...

It can be held out as a legal agent, sure. But in this case, is it? Is the coding agent somehow advertising to the sdkmanager program and/or Google that it has the authority to form legal contracts on behalf of its user?

> I've counseled you already to study the law - go do that before we discuss this further

While this is a reasonable ask for continuing the line of discussion, I'd say it's a lot of effort for a message board comment. So I won't be doing it, at least not to the level of being able to respond intelligently here. Instead, I would ask you: what would you say are the minimum requirements for having an LLM coding agent execute commands on your own machine while explicitly not having the authority to form legally binding contracts?

(Obviously I'm not asking this in the capacity of binding legal advice, and obviously one would still be responsible for any damage said process caused.)
|
|
|
|
|