fn-mote | 8 hours ago
The trifecta:

> LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world

The suggestion is to reduce risk by setting boundaries. Seems like security 101.
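The quoted rule is mechanical enough to express as code. A minimal sketch (all names here are hypothetical, not from any real agent framework) of a policy check that flags an agent configuration combining all three legs of the trifecta:

```python
from dataclasses import dataclass


@dataclass
class AgentCapabilities:
    reads_untrusted_input: bool  # e.g. web pages, incoming email
    reads_secrets: bool          # e.g. API keys, private files
    can_exfiltrate: bool         # e.g. outbound HTTP, sending email


def violates_trifecta(caps: AgentCapabilities) -> bool:
    """All three capabilities together let a prompt injection
    read secrets and send them to an attacker."""
    return (caps.reads_untrusted_input
            and caps.reads_secrets
            and caps.can_exfiltrate)


# Any two legs alone are survivable; all three are not.
browsing_agent = AgentCapabilities(True, False, True)
assert not violates_trifecta(browsing_agent)

full_agent = AgentCapabilities(True, True, True)
assert violates_trifecta(full_agent)
```

The point of the boundary is that dropping any single capability breaks the attack chain, so a gate like this only has to forbid the conjunction, not each capability individually.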
danenania | 7 hours ago | parent
It is, but there's a direct tension here between security and capability. It's hard to do useful things with private data without opening up prompt injection holes, and there's huge demand for this kind of product. Agents also typically work better when you combine all the relevant context rather than splitting it up and isolating it. See: https://cognition.ai/blog/dont-build-multi-agents — but this is at odds with isolating agents that read untrusted input.
rvz | 7 hours ago | parent
It is security 101: at a minimum, this is just setting basic access controls. The moment the agent has access to the internet, the risk increases vastly. But a very clever security researcher could take over the entire machine with a single prompt injection attack, eliminating the need for at least one of the three conditions.
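The "basic access control" here is concrete once internet access is in the picture: if the agent's outbound traffic is restricted to known hosts, the "communicate with the outside world" leg is mostly cut off. A minimal sketch, assuming a hypothetical allowlist configuration (the hostnames below are made up):

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist for an agent's HTTP tool.
ALLOWED_HOSTS = {"api.internal.example", "docs.example.com"}


def egress_allowed(url: str) -> bool:
    """Permit outbound requests only to explicitly allowlisted hosts,
    blocking exfiltration to attacker-controlled destinations."""
    host = urlparse(url).hostname
    return host in ALLOWED_HOSTS


assert egress_allowed("https://docs.example.com/page")
assert not egress_allowed("https://attacker.example.net/exfil")
```

A check like this doesn't stop prompt injection itself, but it narrows what a successful injection can do, which is the boundary-setting the parent comments describe.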