▲ | simonw 3 days ago | |||||||||||||||||||||||||
I think the core of the whole problem is that if you have an LLM with access to tools and exposure to untrusted input, you should consider the author of that untrusted input to be have total control over the execution of those tools. MCP is just a widely agreed upon abstraction over hooking an LLM up to some tools. A significant potion of things people want to do with LLMs and with tools in general involve tasks where a malicious attacker taking control of those tools is a bad situation. Is that what you mean by context hygiene? That end users need to assume that anything bad in the context can trigger unwanted actions, just like you shouldn't blindly copy and paste terminal commands from a web page into your shell (cough, curl https://.../install.sh | sh) or random chunks of JavaScript into the Firefox devtools console on Facebook.com ? | ||||||||||||||||||||||||||
▲ | tptacek 3 days ago | parent [-] | |||||||||||||||||||||||||
On the first two paragraphs: we agree. (I just think that's both more obvious and less fundamental to the model than current writing on this suggests). On the latter two paragraphs: my point is that there's nothing fundamental to the concept of an agent that requires you to mix untrusted content with sensitive tool calls. You can confine untrusted content to its own context window, and confine sensitive tool calls to "sandboxed" context windows; you can feed raw context from both to a third context window to summarize or synthesize; etc. | ||||||||||||||||||||||||||
|