altruios 2 hours ago

With openclaw... you CAN fire an LLM. Just replace it with another model, or swap out its soul.md/identity.md.

It is a security issue. One that may be fixed -- like all security issues -- with enough time, attention, thought, and care. Metrics for performance against this issue are how we tell whether we are correcting direction or not.

There is no 'perfect lock'. When it comes to security, there are just reasonable locks.
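The "fire it" step above can be sketched as a file swap. This is a hypothetical sketch only -- the workspace layout, `SOUL.md` persona file, and `config.json` model key are assumptions for illustration, not openclaw's documented structure:

```shell
# Hypothetical sketch: "firing" an agent by swapping its model and persona.
# All paths and keys below are assumed, not taken from openclaw docs.
WORKSPACE=./agent-workspace
mkdir -p "$WORKSPACE"

# Initial agent: persona file plus a model choice.
echo "You are a cautious assistant." > "$WORKSPACE/SOUL.md"
printf '{"model": "model-a"}\n' > "$WORKSPACE/config.json"

# "Fire" it: overwrite the persona and point the config at a new model.
echo "You are a terse reviewer." > "$WORKSPACE/SOUL.md"
sed -i.bak 's/"model-a"/"model-b"/' "$WORKSPACE/config.json"

cat "$WORKSPACE/config.json"
```

The point is only that, unlike a human hire, the whole "identity" lives in replaceable files, so swapping it is a one-line operation.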

datsci_est_2015 an hour ago | parent | next [-]

How is it feasible to create sufficiently-encompassing metrics when the attack surface is the entire automaton’s interface with the outside world?

If you insist on the lock analogy, most locks are easily defeated, and the usual wisdom is "spend about as much on the lock as you spent on the thing you're protecting" (at least with e.g. bikes). Other locks are meant simply to slow attackers down while something is being monitored (e.g. storage lockers). Still others are just a social contract.

I don’t think any of those considerations map neatly to the “LLM divulges secrets when prompted” space.

The better analogy might be the cryptography that ensures your virtual private server can only be accessed by you.

Edit: the reason “firing” matters is that humans behave more cautiously when there are serious consequences. Call me up when LLMs can act more cautiously when they know they’re about to be turned off, and maybe when they have the urge to procreate.

gleipnircode 2 hours ago | parent | prev [-]

Right, and that's exactly my question. Is a normal lock already enough to stop 99% of attackers, or do you need the premium lock to get any real protection? This test uses Opus, but what about the low-budget locks?