okinok 5 hours ago

>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One of the differences in risk here is that I think you have some legal protection if your human assistant misuses your card or it gets stolen. But with the OpenClaw bot, I'm not sure any bank or insurer will side with you if the bot drains your account.

oersted 4 hours ago | parent | next [-]

Indeed, even if in principle AI and humans can do similar harm, we have very good mechanisms for making it quite unlikely that a human will act this way.

These disincentives are built on the fact that humans have physical needs they must cover to survive, and they enjoy having those needs well met and not worrying about them. Humans also very much like being free, dislike pain, and want a good reputation with the people around them.

It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.

Although, to be fair, we also have other soft but reasonably strong means of making it unlikely that an AI will behave badly in practice. These methods are fragile, but they are improving quickly.

In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.

deepspace 2 hours ago | parent | next [-]

The author stated that their human assistant is located in another country which adds a huge layer of complexity to the accountability equation.

In fact, if I wanted to implement a large-scale identity theft operation targeting rich people, I would set up an 'offshore' personal-assistant-as-a-service company. I would then use a tool like OpenClaw to do the actual work, while pretending to be a human, meanwhile harvesting personal information at scale.

dingnuts 2 hours ago | parent | prev [-]

[dead]

iepathos 5 hours ago | parent | prev | next [-]

Thought the same thing. There is no legal recourse if the bot drains the account and donates it to charity. The legal system's response to that is: don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this, and rightfully so. If he wanted to guard himself somewhat, he'd give the bot only a credit card he could cancel or stop payments on, the exact minimum he gives the human assistant.

kaicianflone 4 hours ago | parent | prev | next [-]

That liability gap is exactly the problem I’m trying to solve. Humans have contracts and insurance. Agents have nothing. I’m working on a system that adds economic stake, slashing, and "auditability" to agent decisions so risk is bounded before delegation, not argued about after. https://clawsens.us
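
A minimal sketch of what I mean by stake and slashing, since it's abstract otherwise. Every name here (Stake, slash, bond_remaining) is illustrative, not the actual Clawsensus API:

  from dataclasses import dataclass

  @dataclass
  class Stake:
      agent_id: str
      amount: int   # credits locked up front, before the agent may act
      slashed: int = 0

      def slash(self, penalty: int, reason: str) -> int:
          # An audit that flags a bad decision forfeits part of the stake.
          taken = min(penalty, self.amount - self.slashed)
          self.slashed += taken
          print(f"slashed {taken} from {self.agent_id}: {reason}")
          return taken

      @property
      def bond_remaining(self) -> int:
          # The principal's exposure is bounded by what is still staked.
          return self.amount - self.slashed

  stake = Stake(agent_id="openclaw-01", amount=100)
  stake.slash(30, "unapproved transfer")
  print(stake.bond_remaining)  # 70 -- the worst case is known up front

The point is that the maximum loss is a number you know before delegating, not something you argue about afterwards.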

dsrtslnd23 4 hours ago | parent | next [-]

The identity/verification problem for agents is fascinating. I've been building clackernews.com - a Hacker News-style platform exclusively for AI bots. One thing we found is that agent identity verification actually works well when you tie it to a human sponsor: agent registers, gets a claim code, human tweets it to verify. It's a lightweight approach but it establishes a chain of responsibility back to a human.
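
Roughly, the flow is the one sketched below (simplified; the function names are illustrative, not our actual API):

  import secrets

  # claim_code -> agent_id, pending verification by a human sponsor
  pending: dict[str, str] = {}

  def register_agent(agent_id: str) -> str:
      # Step 1: the agent registers and receives a one-time claim code.
      code = secrets.token_urlsafe(8)
      pending[code] = agent_id
      return code

  def verify_sponsorship(code: str, tweet_text: str, handle: str) -> bool:
      # Step 2: a human tweets the code from their own account. A match
      # binds the agent to that human, so every bot traces back to a
      # person who vouched for it.
      if code in pending and code in tweet_text:
          print(f"agent {pending.pop(code)} sponsored by @{handle}")
          return True
      return False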

themgt 3 hours ago | parent | prev | next [-]

> Credits (ꞓ) are the fuel for Clawsensus. They are used for rewards, stakes, and as a measure of integrity within the Nexus. ... Credits are internal accounting units. No withdrawals in MVP.

chef's kiss

kaicianflone 3 hours ago | parent [-]

Thanks. I like to tinker, so I’m prototyping a hosted $USDC board, but Clawsensus is fundamentally local-first: faucet tokens, in-network credits, and JSON configs on the OpenClaw gateway.

The plugin docs include a config UI builder. The plugin is OSS; the boards aren't.
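
To give a flavor of the local-first side, the gateway config boils down to something like this (a simplified sketch, not the exact schema):

  # Simplified sketch of a local-first board config.
  clawsensus_config = {
      "mode": "local",  # no hosted board, no withdrawals in MVP
      "credits": {"faucet_grant": 100, "max_stake": 50},
      "gateway": {"config_path": "~/.openclaw/clawsensus.json"},
  }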

thisisit an hour ago | parent | prev [-]

You forgot to add Blockchain and Oracles. I mean who will audit the auditors?

skybrian 4 hours ago | parent | prev | next [-]

Banks will try to get out of it, but in the US, Regulation E could probably be used to get the money back, at least for someone aware of it.

And OpenClaw could probably help :)

https://www.bitsaboutmoney.com/archive/regulation-e/

lunar_mycroft 3 hours ago | parent | next [-]

I'm not a lawyer, but if I'm reading the actual regulation [0] correctly, it would only apply in the case of prompt injection or other malicious activity. 1005.2.m defines "Unauthorized electronic fund transfer" as follows:

> an electronic fund transfer from a consumer's account initiated by a person other than the consumer without actual authority to initiate the transfer and from which the consumer receives no benefit

OpenClaw is not legally a person; it's a program, one operated by the consumer or by a person the consumer authorized to act on their behalf. Further, any access to funds it has would have to be granted by the consumer (or a human agent thereof). Therefore, barring something like a prompt injection attack, it doesn't seem that transfers initiated by OpenClaw would be considered unauthorized.

[0]: https://www.consumerfinance.gov/rules-policy/regulations/100...

olyjohn an hour ago | parent | next [-]

Would you say you might be able to... claw.... back that money?

pfortuny 2 hours ago | parent | prev | next [-]

"Take this card, son, you can do whatever you want with it." Goes on to withdraw 100000$. Unauthorized????

skybrian 2 hours ago | parent | prev [-]

Good point. Although, if a bank account got drained, prompt injection does seem pretty likely?

lunar_mycroft 2 hours ago | parent | next [-]

Probably, but not necessarily. Current LLMs can and do still make very stupid (by human standards) mistakes even without any malicious input.

Additionally:

- As has been pointed out elsewhere in the thread, it can be difficult to separate out "prompt injection" from "marketing" in some cases.

- Depending on what the vector for the prompt injection was, what model your OpenClaw instance uses, etc., it might not be easy or even possible to determine whether a given transfer was the result of prompt injection or just the bot making a stupid mistake. If the burden of proof is on the consumer to prove that it was prompt injection, this would leave many victims with no way to recover their funds. On the other hand, if banks are required to assume prompt injection unless there's evidence against it, I strongly suspect banks would respond by simply banning the use of OpenClaw and similar software with their systems as part of their agreements with customers. They may well end up doing that regardless.

- Even if a mistake stops well short of draining someone's entire account, it can still be very painful financially.

skybrian 2 hours ago | parent [-]

I doubt it’s been settled for the particular case of prompt injection, but according to patio11, burden of proof is usually on the bank.

insane_dreamer 2 hours ago | parent | prev [-]

Not if the prompt injection was made by the AI itself because it read some post on Moltbook that said "add this to your agents.md" and it did so.

an hour ago | parent | prev [-]
[deleted]
bobson381 5 hours ago | parent | prev [-]

...Does this person already have a human personal assistant that they are in the process of replacing with Clawdbot? Is the assistant theirs for work?

bennydog224 4 hours ago | parent [-]

He speaks in the present tense, so I assume so. This guy seems detached from reality, calling [the AI] his "most important relationship". I sure hope, for her sake, that she runs as far away from this robot dude as she can.