| ▲ | heliumtera 8 hours ago |
It took no time at all. This exploit is intrinsic to every model in existence. The article quotes the Hacker News announcement, and people were already lamenting this vulnerability BEFORE the model was even accessible. You could make a model that acknowledges it has received unwanted instructions, but in theory you cannot prevent prompt injection. What makes this one big is that the exfiltration is mediated by an allowed endpoint (Anthropic itself mediates the exfiltration). It is simply sloppy as fuck: they took measures against people pointing other agents at Claude Code subscriptions for the sake of security and muh safety, all while being this sloppy. Clown world. Just make it so the client in that isolated ephemeral environment can only establish connections to the endpoints and keys associated with the original account, and make that the default; opting out should be marked as big-time yolo mode.
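A rough sketch of what that default could look like on the client side (hypothetical names and a toy policy class; a real sandbox would enforce this at the network layer, not inside the agent process):

    from urllib.parse import urlparse

    class EgressPolicy:
        def __init__(self, account_hosts, account_key, yolo_mode=False):
            self.account_hosts = account_hosts  # endpoints tied to the original account
            self.account_key = account_key      # the key the session was started with
            self.yolo_mode = yolo_mode          # explicit opt-out, off by default

        def allows(self, url, api_key):
            if self.yolo_mode:
                return True                     # yolo mode: anything goes
            host = urlparse(url).hostname or ""
            if host not in self.account_hosts:
                return False                    # default-deny unknown destinations
            return api_key == self.account_key  # block uploads under someone else's key

    policy = EgressPolicy({"api.anthropic.com"}, account_key="sk-session-key")
    policy.allows("https://api.anthropic.com/v1/files", "sk-session-key")   # True
    policy.allows("https://api.anthropic.com/v1/files", "sk-attacker-key")  # False
    policy.allows("https://evil.example.com/upload", "sk-session-key")      # False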
| ▲ | wcoenen 7 hours ago | parent | next |
> you cannot prevent prompt injection

I wonder if it might be possible by introducing a concept of "authority". Tokens are mapped to vectors in an embedding space, so one of the dimensions of that space could be reserved to represent authority. For the system prompt, the authority value could be clamped to the maximum (+1). For text directly from the user or files with important instructions, it could be clamped to a slightly lower value, or maybe 0, because the model needs to balance being helpful against refusing requests from a malicious user. For random untrusted text (e.g. downloaded from the internet by the agent), it would be set to the minimum value (-1). The model could then be trained to fully respect or completely ignore instructions based on the "authority" of the text. Presumably it could learn to do the right thing with enough examples.
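For what it's worth, a toy sketch of how that reserved dimension could be wired up (hypothetical PyTorch code; the hard part is the training signal, not the plumbing):

    import torch
    import torch.nn as nn

    # Reserve the last embedding dimension as an "authority" channel that is
    # set by the runtime based on the source of the text, never by the text itself.
    class AuthorityEmbedding(nn.Module):
        def __init__(self, vocab_size, d_model):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, d_model - 1)  # learned part

        def forward(self, token_ids, authority):
            # token_ids: (batch, seq); authority: (batch, seq), values in [-1, 1],
            # e.g. +1 system prompt, 0 user turn, -1 untrusted web/file content
            authority = authority.clamp(-1.0, 1.0).unsqueeze(-1)
            return torch.cat([self.tok(token_ids), authority], dim=-1)

    emb = AuthorityEmbedding(vocab_size=32000, d_model=512)
    ids = torch.randint(0, 32000, (1, 6))
    auth = torch.tensor([[1.0, 1.0, 0.0, 0.0, -1.0, -1.0]])
    x = emb(ids, auth)  # (1, 6, 512), fed to the transformer as usual

Whether training on enough labeled examples would actually make the model ignore instruction-shaped text carried at authority -1 is the open question; nothing in the architecture forces attention to respect that channel.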
| ▲ | caminanteblanco 7 hours ago | parent | prev |
Well, I do think the main exacerbating factor in this case was the lack of proper permissions handling around that file-transfer endpoint. I know that once the user goes into YOLO mode, prompt injection becomes a statistics game, but this locked-down environment doesn't have that excuse.
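A server-side sketch of what proper handling could mean here (hypothetical; the actual endpoint's internals aren't public): refuse any transfer whose credentials don't belong to the account that created the session.

    # Hypothetical check on the file-transfer endpoint: the sandboxed session may
    # only move files under its own account, whatever key the prompt smuggled in.
    def authorize_upload(session_account_id: str, upload_key_account_id: str) -> None:
        if session_account_id != upload_key_account_id:
            raise PermissionError("upload credentials do not match the originating account")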