| ▲ | xmprt 6 hours ago | |
> Fundamentally, with LLMs you can't separate instructions from data, which is the root cause for 99% of vulnerabilities This isn't a problem that's fundamental to LLMs. Most security vulnerabilities like ACE, XSS, buffer overflows, SQL injection, etc., are all linked to the same root cause that code and data are both stored in RAM. We have found ways to mitigate these types of issues for regular code, so I think it's a matter of time before we solve this for LLMs. That said, I agree it's an extremely critical error and I'm surprised that we're going full steam ahead without solving this. | ||
| ▲ | candiddevmike 5 hours ago | parent | next [-] | |
We fixed these in determinate contexts only for the most part. SQL injection specifically requires the use of parametrized values typically. Frontend frameworks don't render random strings as HTML unless it's specifically marked as trusted. I don't see us solving LLM vulnerabilities without severely crippling LLM performance/capabilities. | ||
| ▲ | simonw 4 hours ago | parent | prev | next [-] | |
> We have found ways to mitigate these types of issues for regular code, so I think it's a matter of time before we solve this for LLMs. We've been talking about prompt injection for over three years now. Right from the start the obvious fix has been to separate data from instructions (as seen in parameterized SQL queries etc)... and nobody has cracked a way to actually do that yet. | ||
| ▲ | ArcHound 5 hours ago | parent | prev [-] | |
Yes, plenty of other injections exist, I meant to include those. What I meant, that at the end of the day, the instructions for LLMs will still contain untrusted data and we can't separate the two. | ||