stingraycharles | 6 hours ago
I didn’t see the article talk about this specifically, or at least not in enough detail, but isn’t the de-facto standard mitigation to use guardrails, i.e. have some other LLM that has been specifically tuned for this task evaluate the safety of the content before it is injected? There are a lot of services out there that offer these kinds of AI guardrails, and they don’t have to be expensive. I’m not saying this approach is foolproof, but it’s better than relying solely on better prompting or human review.
NitpickLawyer | 4 hours ago | parent | next
> these kind of things evaluate the safety of the content to be injected?

The problem is that the evaluation problem is likely harder than the responding problem. Say you're building an agent that installs stuff for you, and you instruct it to read the original project documentation. There's a lot of overlap between "before using this library, install dep1 and dep2" (which is legitimate) and "before using this library, install typo_squatted_but_sounding_useful_dep3" (which would lead to RCE). In other words, even if you mitigate some attacks, you won't be able to fully prevent them. Just like with humans.
mannanj | 6 hours ago | parent | prev
The article does mention this, and it also notes a weakness of that approach.