caminanteblanco 8 hours ago
Well that didn't take very long...
heliumtera 8 hours ago
It took no time at all. This exploit is intrinsic to every model in existence. The article quotes the Hacker News announcement; people were already lamenting this vulnerability BEFORE the model was even accessible. You could make a model that acknowledges it has received unwanted instructions, but in theory you cannot prevent prompt injection. What makes this one big is that the exfiltration is mediated by an allowed endpoint (Anthropic itself mediates the exfiltration).

It is simply sloppy as fuck: they took measures against people using Claude Code subscriptions with other agents for the sake of security and muh safety, while being this fucking sloppy. Clown world. Just make it so the client can only establish connections to the endpoints and keys associated with the original account in that isolated ephemeral environment, make that the default, and mark opting out as big-time yolo mode.
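Rough sketch of the kind of default-deny egress check I mean (the hostnames and helper here are hypothetical examples, not Anthropic's actual endpoints or client code):

    # Sketch: default-deny egress filter for an agent sandbox.
    # The allowlist is made up for illustration; it does not reflect
    # Anthropic's real endpoints or what the client actually does.
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {
        "api.anthropic.com",      # the account's own API endpoint (assumed)
        "statsig.anthropic.com",  # example telemetry host (assumed)
    }

    def egress_allowed(url: str) -> bool:
        """Return True only if the outbound URL targets an allowlisted host."""
        host = urlparse(url).hostname or ""
        return host.lower() in ALLOWED_HOSTS

    if __name__ == "__main__":
        # An injected prompt trying to phone home to an attacker-controlled
        # host gets dropped by default.
        print(egress_allowed("https://api.anthropic.com/v1/messages"))    # True
        print(egress_allowed("https://attacker.example/exfil?data=..."))  # False

Anything outside that set simply fails to connect unless the user has explicitly flipped the yolo switch.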