▲ | lelanthran 4 days ago | |
I dont understand how any of what you said helps or even mitigates the problem with an LLM getting prompt injected. I mean, only enabling trusted tools does not help defend against prompt injection, does it? The vector isn't the tool, after all, it's the LLM itself. |