stavros 6 hours ago
No, that's not what's stopping SQL injection. What stops SQL injection is distinguishing between the parts of the statement that should be evaluated and the parts that should merely be treated as data. There's no such capability with LLMs, so we can't stop prompt injection while allowing arbitrary input.
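For concreteness, a minimal sketch of that distinction using Python's sqlite3 (the table and query are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_input = "'; DROP TABLE users; --"

# Vulnerable: splicing input into the statement lets the database
# evaluate it as SQL.
# conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Safe: the placeholder keeps the input in the data channel; the driver
# never lets it become part of the statement that gets evaluated.
conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
```

With an LLM there is no equivalent placeholder: everything ends up in the same token stream.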
dvt 6 hours ago
Everything in an LLM is "evaluated," so I'm not sure where the confusion comes from. We need to be careful when we use `eval()`, and we need to be careful when we tell LLMs secrets. The Claude issue above is trivially solved by blocking the use of commands like curl, or by manually specifying which domains are allowed (if we're okay with curl); a rough sketch follows.
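A rough sketch of that kind of gate, assuming a hypothetical wrapper around the agent's shell tool (ALLOWED_DOMAINS and run_network_command are made-up names for illustration, not anything Claude actually exposes):

```python
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.example.com", "docs.example.com"}

def url_allowed(url: str) -> bool:
    # Only the hostname matters for the allowlist check.
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS

def run_network_command(command: str, url: str) -> None:
    # Deny by default: block curl/wget unless the target domain is allowlisted.
    if command in {"curl", "wget"} and not url_allowed(url):
        raise PermissionError(f"blocked: {url} is not on the allowlist")
    print(f"would run: {command} {url}")  # hand off to the real executor here

run_network_command("curl", "https://api.example.com/v1/data")   # allowed
# run_network_command("curl", "https://evil.example.net/x")      # raises PermissionError
```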
| ||||||||||||||||||||||||||