lunar_mycroft, 2 hours ago:
Probably, but not necessarily. Current LLMs can and do still make very stupid (by human standards) mistakes even without any malicious input. Additionally:

- As has been pointed out elsewhere in the thread, it can be difficult to separate "prompt injection" from "marketing" in some cases.

- Depending on the vector for the prompt injection, what model your OpenClaw instance uses, etc., it might not be easy or even possible to determine whether a given transfer was the result of prompt injection or just the bot making a stupid mistake. If the burden of proof is on the consumer to prove that it was prompt injection, this would leave many victims with no way to recover their funds. On the other hand, if banks are required to assume prompt injection unless there's evidence against it, I strongly suspect banks would respond by simply banning the use of OpenClaw and similar software with their systems as part of their agreements with their customers. They might well end up doing that regardless.

- Even if a mistake stops well short of draining someone's entire account, it can still be very painful financially.
skybrian, 2 hours ago (parent):
I doubt it’s been settled for the particular case of prompt injection, but according to patio11, the burden of proof is usually on the bank.