Remix.run Logo
codethief 3 hours ago

> https://github.com/flatpak/flatpak/security/advisories/GHSA-...

Just yesterday I was thinking about a related attack vector on AI agents: Many harnesses "sandbox" (at the application level) file reads/writes and shell commands by checking whether a given path has been whitelisted. However, I bet there are cases where the agent could simply create a symlink to somewhere else and thus trick the "sandbox" into thinking the agent is authorized?

neilv 10 minutes ago | parent | next [-]

I bet you're right. This is one kind of thing you need a meticulous programmer to do. But instead, I'd guess most AI-dogfooding engineering organizations in the near future will be taking a vibe-code-it-and-AI-red-team-it approach.

I don't trust sandbox claims from those companies, and only run CLI-ish code on workstation inside a full VM (not even a container).

TZubiri 7 minutes ago | parent | prev | next [-]

Any attempt to analyze a string that will be executed as a command is a fundamentally unsafe approach, presumably I can make an .sh file and run that and circumvent the mechanism? Off the top of my head. You could say that your analysis will be so deep that it can check the file scripts, it' can do so recursively through bash file chain s of any size, it's so smart in fact it can undecode base64 contents, and even if...

No, stop, if you do that, you have entered a rabbit hole, ignore the command, assume it can be malicious. Path constraints are already fundamentally solved with tech as old as UNIX users, you are 50 years behind in terms of security and should not be concerning yourself with cutting edge issues for that reason.

adi_kurian 2 hours ago | parent | prev | next [-]

Assume the worst.

2 hours ago | parent | prev [-]
[deleted]