| ▲ | sornaensis 3 hours ago | |
IMO the solution is the same as org security: fine grained permissions and tools. Models/Agents need a narrow set of things they are allowed to actually trigger, with real security policies, just like people. You can mitigate agent->agent triggers by not allowing direct prompting, but by feeding structured output of tool A into agent B. | ||