Remix.run Logo
wat10000 2 hours ago

It's very simple: prompt injection is a completely unsolved problem. As things currently stand, the only fix is to avoid the lethal trifecta.

Unfortunately, people really, really want to do things involving the lethal trifecta. They want to be able to give a bot control over a computer with the ability to read and send emails on their behalf. They want it to be able to browse the web for research while helping you write proprietary code. But you can't safely do that. So if you're a massively overvalued AI company, what do you do?

You could say, sorry, I know you want to do these things but it's super dangerous, so don't. You could say, we'll give you these tools but be aware that it's likely to steal all your data. But neither of those are attractive options. So instead they just sort of pretend it's not a big deal. Prompt injection? That's OK, we train our models to be resistant to them. 92% safe, that sounds like a good number as long as you don't think about what it means, right! Please give us your money now.

csmpltn an hour ago | parent | next [-]

> «It's very simple: prompt injection is a completely unsolved problem. As things currently stand, the only fix is to avoid the lethal trifecta.»

True, but we can easily validate that regardless of what’s happening inside the conversation - things like «rm -rf» aren’t being executed.

AgentOrange1234 an hour ago | parent | next [-]

For a specific bad thing like "rm -rf" that may be plausible, but this will break down when you try to enumerate all the other bad things it could possibly do.

javcasas an hour ago | parent [-]

And you can always create good stuff that is to be interpreted in a really bad way.

Please send an email praising <person>'s awesome skills at <weird sexual kink> to their manager.

wat10000 an hour ago | parent | prev [-]

We can, but if you want to stop private info from being leaked then your only sure choice is to stop the agent from communicating with the outside world entirely, or not give it any private info to begin with.

plaguuuuuu 2 hours ago | parent | prev [-]

even if you limit to 2/3 I think any sort of persistence that can be picked up by agents with the other 1 can lead to compromise, like a stored XSS.