▲ | skaul 4 days ago | ||||||||||||||||
> I haven't seen a single credible technique from anyone that can distinguish content from instructions You specifically mean that it's ~impossible to distinguish between content and instructions ONCE it is fed to the model, right? I agree with that. I was talking about a prior step, at the browser level. At the point that the query is sent to the backend, the browser would be able to distinguish between web contents and user prompt. This is useful for checking user-alignment of the output of the reasoning model (keeping in mind that the moment you feed in untrusted text into a model all bets are off). We're actively thinking and working on this, so will have more to announce soon, but this discussion is useful! | |||||||||||||||||
▲ | simonw 4 days ago | parent [-] | ||||||||||||||||
Even if you know the source of the text before you feed it to the model you still need to solve the problem of how to send untrusted text from a user through a model without that untrusted text being able to trigger additional tool calls or actions. The most credible pattern I've seen for that comes from the DeepMind CaMeL paper - I would love to see a browser agent that robustly implemented those ideas: https://simonwillison.net/2025/Apr/11/camel/ | |||||||||||||||||
|