embedding-shape · 2 hours ago
I'm just guessing, but it seems the people who write these agent CLIs haven't found a good heuristic for allowing/disallowing/asking the user about permissions for commands, so instead of sitting down and actually figuring it out, someone had the bright idea to let the LLM manage that allowing/disallowing itself. How that ever made sense will probably forever be lost on me. `chroot` was literally the first thing I reached for when I first installed a local agent, just by intuition (I later moved on to a container wrapper), and now I'm reading about people giving these agents direct access to reply to their emails and more.
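To be concrete about what I mean by a container wrapper, it's nothing fancy; a minimal sketch in Python, where the agent CLI name ("my-agent") and image name ("agent-sandbox:latest") are placeholders for whatever you actually run, and in practice you'd relax the network isolation enough for the agent to reach its model API:

    #!/usr/bin/env python3
    """Sketch of a container wrapper around a coding agent CLI.

    Names are hypothetical: "my-agent" stands in for your agent CLI and
    "agent-sandbox:latest" for an image you build with it installed.
    """
    import os
    import subprocess
    import sys


    def run_agent_in_container(workdir: str, agent_args: list[str]) -> int:
        """Run the agent in a throwaway container that can only see `workdir`."""
        cmd = [
            "docker", "run",
            "--rm",                  # discard the container afterwards
            "-it",                   # keep the agent's terminal UI interactive
            "--network", "none",     # no outbound network from tool calls;
                                     # real setups allow an allowlisted proxy
                                     # so the agent can still reach its API
            "-v", f"{os.path.abspath(workdir)}:/workspace",  # mount only the project dir
            "-w", "/workspace",
            "agent-sandbox:latest",  # hypothetical image with the agent installed
            "my-agent", *agent_args, # hypothetical agent CLI
        ]
        return subprocess.call(cmd)


    if __name__ == "__main__":
        sys.exit(run_agent_in_container(".", sys.argv[1:]))

The point is just that the agent's shell commands can't touch anything outside the mounted project directory, no matter what permissions the CLI decides to grant itself.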
valleyer · an hour ago
Here's OpenAI's docs page on how they sandbox Codex: https://developers.openai.com/codex/security/

Here's the macOS kernel-enforced sandbox profile that gets applied to processes spawned by the LLM: https://github.com/openai/codex/blob/main/codex-rs/core/src/...

I think skepticism is healthy here, but there's no need to just guess.