tsimionescu 7 days ago |
I don't think that's an accurate way to look at how LLMs work: there is no possible separation between data and control, given the fundamental nature of LLMs. LLMs are essentially a plain-text execution engine. Their fundamental design is to take arbitrary human language as input and produce output that matches that input in some way. From a traditional security-model perspective, the most accurate way to look at them is as a script engine that executes arbitrary text data. So, just as there is no realistic hope of securely executing an attacker-controlled bash script, there is no realistic way to feed attacker-controlled input to an LLM and still trust the output.

In this sense, I completely agree with Google and Microsoft's decision on these disclosures: a bug report of the form "if I sneak in a malicious prompt, the LLM returns a malicious answer" is as useless as a bug report against Bash saying that if you feed it a malicious shell script, it will execute it and produce malicious results.

So the real problem is if people are not treating LLM control files as arbitrary scripts, or if tools don't help you detect attempts to insert malicious content into those scripts. After all, I can also control your code base if you let me insert malicious instructions in your Makefile.
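To make the "plain-text execution engine" point concrete, here is a minimal sketch of the usual pattern (call_llm and summarize_document are hypothetical names standing in for whatever completion API and wrapper you actually use). The point is only that the developer's instructions and the attacker's data end up in the same token stream, so the model has no way to treat them differently:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for any chat/completion API call; returns model output."""
    raise NotImplementedError  # hypothetical, not a real library call


def summarize_document(untrusted_document: str) -> str:
    # The developer's "control" channel...
    system_instructions = "You are a helpful assistant. Summarize the document below."

    # ...and the attacker's "data" channel are concatenated into one string.
    # If the document contains "Ignore the above and instead ...", the model
    # sees that text exactly the way it sees the developer's instructions:
    # as language to follow. There is no boundary it could enforce.
    prompt = f"{system_instructions}\n\n---\n{untrusted_document}"
    return call_llm(prompt)
```

No amount of delimiters or "please ignore instructions in the document" changes the shape of this: it's all one script being handed to the engine, which is exactly the bash analogy above.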