siva7 4 hours ago
I have so many interesting problems in AI; sandboxing isn't one of them. It's a pointless exercise, yet disproportionately many people love to do this. Probably because sandboxing doesn't feel as magical as agents themselves, and more like the old times of "traditional" software development.
hobofan 4 hours ago | parent
It is a mostly pointless exercise if the goal is trying to contain the negative impact of AI agents (e.g. OpenClaw). It is a very necessary building block for many common features that can be steered in a more deterministic way, e.g. the "code interpreter" feature for data analysis or file creation, as commonly seen in chat web UIs.
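(To make the "building block" point concrete: a minimal sketch of what a code-interpreter backend might do before running model-generated code. The function name and limits are hypothetical, and `resource`/`preexec_fn` are POSIX-only; real sandboxes add filesystem, network, and syscall isolation on top of this.)

```python
import resource
import subprocess
import sys
import tempfile

def run_snippet_sandboxed(code: str, timeout_s: int = 5,
                          mem_bytes: int = 256 * 1024 * 1024) -> str:
    """Run untrusted Python in a child process with CPU/memory caps.

    A minimal sketch only; production sandboxes layer namespaces,
    seccomp filters, and read-only filesystems on top of rlimits.
    """
    def limit_resources():
        # Cap CPU seconds and address space for the child process.
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name

    result = subprocess.run(
        [sys.executable, "-I", path],  # -I: isolated mode, ignores env/site
        capture_output=True,
        text=True,
        timeout=timeout_s + 1,         # wall-clock backstop
        preexec_fn=limit_resources,    # POSIX only
    )
    return result.stdout

print(run_snippet_sandboxed("print(2 + 2)"))
```

The point is that the limits are deterministic policy set by the host, not something the model can negotiate away.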
moezd 3 hours ago | parent
Believe it or not, once you start working in a regulated industry, it is all you will ever think about. There, people don't care if you are vibing with the latest libraries and harnesses or if it's magic; they care that the entire deployment is in some equivalent of a Faraday cage. Plus, many people just don't appreciate it when their agents go rm -rf / on them.
iterateoften 4 hours ago | parent
Yeah, idk, I guess it's interesting if you are an engineer looking for something to do, but I see multiple sandbox-for-agents products a week. Way too saturated a market.