Remix.run Logo
gleipnircode 3 hours ago

OpenClaw user here. Genuinely curious to see if this works and how easy it turns out to be in practice.

One thing I'd love to hear opinions on: are there significant security differences between models like Opus and Sonnet when it comes to prompt injection resistance? Any experiences?

datsci_est_2015 3 hours ago | parent [-]

> One thing I'd love to hear opinions on: are there significant security differences between models like Opus and Sonnet when it comes to prompt injection resistance?

Is this a worthwhile question when it’s a fundamental security issue with LLMs? In meatspace, we fire Alice and Bob if they fail too many phishing training emails, because they’ve proven they’re a liability.

You can’t fire an LLM.

reassess_blind an hour ago | parent | next [-]

Yes, it’s worthwhile because the new models are being specifically trained and hardened against prompt injection attacks.

Much like how you wouldn’t immediately fire Alice, you’d train her and retest her, and see whether she had learned from her mistakes. Just don’t trust her with your sensitive data.

altruios 2 hours ago | parent | prev | next [-]

with openclaw... you CAN fire an LLM. just replace it with another model, or soul.md/idenity.md.

It is a security issue. One that may be fixed -- like all security issues -- with enough time/attention/thought&care. Metrics for performance against this issue is how we tell if we are going to correct direction or not.

There is no 'perfect lock', there are just reasonable locks when it comes to security.

datsci_est_2015 an hour ago | parent | next [-]

How is it feasible to create sufficiently-encompassing metrics when the attack surface is the entire automaton’s interface with the outside world?

If you insist on the lock analogy, most locks are easily defeated, and the wisdom is mostly “spend about the equal amount on the lock as you spent on the thing you’re protecting” (at least with e.g. bikes). Other locks are meant to simply slow down attackers while something is being monitored (e.g. storage lockers). Other locks are simply a social contract.

I don’t think any of those considerations map neatly to the “LLM divulges secrets when prompted” space.

The better analogy might be the cryptography that ensures your virtual private server can only be accessed by you.

Edit: the reason “firing” matters is that humans behave more cautiously when there are serious consequences. Call me up when LLMs can act more cautiously when they know they’re about to be turned off, and maybe when they have the urge to procreate.

gleipnircode 2 hours ago | parent | prev [-]

Right, and that's exactly my question. Is a normal lock already enough to stop 99% of attackers? Or do you need the premium lock to get any real protection? This test uses Opus but what about the low budget locks?

gleipnircode 2 hours ago | parent | prev [-]

It's a fundamental issue I agree.

But we don't stop using locks just because all locks can be picked. We still pick the better lock. Same here, especially when your agent has shell access and a wallet.

datsci_est_2015 2 hours ago | parent [-]

Is “lock” a fair analogy?

We stopped eating raw meat because some raw meat contained unpleasant pathogens. We now cook our meat for the most part, except sushi and tartare which are very carefully prepared.