| ▲ | fzeindl 4 hours ago |
| The principal security problem of LLMs is that there is no architectural boundary between data and control paths. But this combination of data and control into a single, flexible data stream is also the defining strength of a LLM, so it can’t be taken away without also taking away the benefits. |
|
| ▲ | andruby an hour ago | parent | next [-] |
| This was a problem with early telephone lines which was easy to exploit (see Woz & Jobs Blue Box). It got solved by separating the voice and control pane via SS7. Maybe LLMs need this separation as well |
| |
| ▲ | bcrosby95 18 minutes ago | parent [-] | | This is where the old line of "LLMs are just next token predictors" actually factors in. I don't know how you get a next token predictor that user input can't break out of. The answer is for the implementer to try to split what they can, and run pre/post validation. But I highly doubt it will ever be 100%, its fundamental to the technology. |
|
|
| ▲ | VikingCoder 2 hours ago | parent | prev | next [-] |
| The "S" in "LLM" is for "Security". |
|
| ▲ | notatoad 25 minutes ago | parent | prev | next [-] |
| As the article says: this doesn’t necessarily appear to be a problem in the LLM, it’s a problem in Claude code. Claude code seems to leave it up to the LLM to determine what messages came from who, but it doesn’t have to do that. There is a deterministic architectural boundary between data and control in Claude code, even if there isn’t in Claude. |
|
| ▲ | mt_ 3 hours ago | parent | prev | next [-] |
| Exactly like human input to output. |
| |
| ▲ | WarmWash 2 hours ago | parent | next [-] | | We just need to figure out the qualia of pain and suffering so we can properly bound desired and undesired behaviors. | | | |
| ▲ | codebje 3 hours ago | parent | prev [-] | | Well no, nothing like that, because customers and bosses are clearly different forms of interaction. | | |
| ▲ | vidarh 2 hours ago | parent | next [-] | | Just like that, in that that separation is internally enforced, by peoples interpretation and understanding, rather than externally enforced in ways that makes it impossible for you to, e.g. believe the e-mail from an unknown address that claims to be from your boss, or be talked into bypassing rules for a customer that is very convincing. | | |
| ▲ | codebje 2 hours ago | parent [-] | | Being fooled into thinking data is instruction isn't the same as being unable to distinguish them in the first place, and being coerced or convinced to bypass rules that are still known to be rules I think remains uniquely human. | | |
| ▲ | TeMPOraL 2 hours ago | parent | next [-] | | > and being coerced or convinced to bypass rules that are still known to be rules I think remains uniquely human. This is literally what "prompt injection" is. The sooner people understand this, the sooner they'll stop wasting time trying to fix a "bug" that's actually the flip side of the very reason they're using LLMs in the first place. | |
| ▲ | vidarh 2 hours ago | parent | prev | next [-] | | This makes no sense to me. Being fooled into thinking data is instruction is exactly evidence of an inability to reliably distinguish them. And being coerced or convinced to bypass rules is exactly what prompt injection is, and very much not uniquely human any more. | | |
| ▲ | kg 2 hours ago | parent [-] | | The email from your boss and the email from a sender masquerading as your boss are both coming through the same channel in the same format with the same presentation, which is why the attack works. Unless you were both faceblind and bad at recognizing voices, the same attack wouldn't work in-person, you'd know the attacker wasn't your boss. Many defense mechanisms used in corporate email environments are built around making sure the email from your boss looks meaningfully different in order to establish that data vs instruction separation. (There are social engineering attacks that would work in-person though, but I don't think it's right to equate those to LLM attacks.) Prompt injection is just exploiting the lack of separation, it's not 'coercion' or 'convincing'. Though you could argue that things like jailbreaking are closer to coercion, I'm not convinced that a statistical token predictor can be coerced to do anything. | | |
| ▲ | vidarh an hour ago | parent [-] | | > The email from your boss and the email from a sender masquerading as your boss are both coming through the same channel in the same format with the same presentation, which is why the attack works. Yes, that is exactly the point. > Unless you were both faceblind and bad at recognizing voices, the same attack wouldn't work in-person, you'd know the attacker wasn't your boss. Irrelevant, as other attacks works then. E.g. it is never a given that your bosses instructions are consistent with the terms of your employment, for example. > Prompt injection is just exploiting the lack of separation, it's not 'coercion' or 'convincing'. Though you could argue that things like jailbreaking are closer to coercion, I'm not convinced that a statistical token predictor can be coerced to do anything. It is very much "convincing", yes. The ability to convince an LLM is what creates the effective lack of separation. Without that, just using "magic" values and a system prompt telling it to ignore everything inside would create separation. But because text anywhere in context can convince the LLM to disregard previous rules, there is no separation. |
|
| |
| ▲ | PunchyHamster an hour ago | parent | prev [-] | | the second leads to first, in case you still don't realize |
|
| |
| ▲ | orbital-decay an hour ago | parent | prev | next [-] | | These are different "agents" in LLM terms, they have separate contexts and separate training | |
| ▲ | j45 3 hours ago | parent | prev | next [-] | | There can be outliers, maybe not as frequent :) | |
| ▲ | jodrellblank 2 hours ago | parent | prev [-] | | If they were 'clearly different' we would not have the concept of the CEO fraud attack: https://www.barclayscorporate.com/insights/fraud-protection/... That's an attack because trusted and untrusted input goes through the same human brain input pathways, which can't always tell them apart. | | |
| ▲ | runarberg 2 hours ago | parent [-] | | Your parent made no claim about all swans being white. So finding a black swan has no effect on their argument. |
|
|
|
|
| ▲ | groby_b 17 minutes ago | parent | prev | next [-] |
| "The principal security problem of von Neumann architecture is that there is no architectural boundary between data and control paths" We've chosen to travel that road a long time ago, because the price of admission seemed worth it. |
|
| ▲ | clickety_clack 3 hours ago | parent | prev [-] |
| It’s easier not to have that separation, just like it was easier not to separate them before LLMs. This is architectural stuff that just hasn’t been figured out yet. |
| |
| ▲ | fzeindl 3 hours ago | parent [-] | | No. With databases there exists a clear boundary, the query planner, which accepts well defined input: the SQL-grammar that separates data (fields, literals) from control (keywords). There is no such boundary within an LLM. There might even be, since LLMs seem to form adhoc-programs, but we have no way of proving or seeing it. | | |
| ▲ | TeMPOraL 2 hours ago | parent [-] | | There cannot be, without compromising the general-purpose nature of LLMs. This includes its ability to work with natural languages, which as one should note, has no such boundary either. Nor does the actual physical reality we inhabit. |
|
|