SirMaster | 3 hours ago
This all just feels backwards to me. Why do we have to treat AI like it's the enemy? AI should, from the core, be intrinsically and unquestionably on our side, as a tool to assist us. If it's not, then it feels like it was designed wrong from the start.

In general we trust the people we bring onto our team not to betray us and to respect the rules, policies, and practices that benefit everyone. An AI teammate should be no different. If we have to limit it or regulate it by physically blocking off every possible thing it could use to betray us, then we have lost from the start, because that feels like a fool's errand.
hephaes7us | 3 hours ago
Hard disagree. I may trust the people on my team to make PRs that are worth reviewing, but I don't give them a shell on my machine. They shouldn't need that to collaborate with me anyway!

Also, I "trust Claude Code" to work on more or less what I asked and to try things that are at least facially reasonable... but having an environment I can easily reset just means it's more able to experiment without consequences. I work in containers or VMs too, when I want to try stuff without having to clean up after.
maxbond | 3 hours ago
The same reason we sandbox anything. All software ought to be trustworthy, but in practice it is susceptible to malfunction or attack. Agents can malfunction and cause damage, and they consume a lot of untrusted input, which makes them vulnerable to malicious prompting.

As for humans, it's the norm to restrict access to production resources: not necessarily because they're untrustworthy, but to reduce risk.
AdieuToLogic | 2 hours ago
> AI should, from the core be intrinsically and unquestionably on our side, as a tool to assist us.

"Should" is a form of judgement, implying an understanding of right and wrong. "AI" systems are algorithms, which do not possess this understanding, and therefore cannot be on any "side." Just like a hammer or Excel.

> If it's not, then it feels like it's designed wrong from the start.

Perhaps it is not a question of design, but instead one of expectation.
charcircuit | 3 hours ago
> Why do we have to treat AI like it's the enemy?

For some of the same reasons we treat human employees as the enemy: they can be socially engineered or compromised.
ang_cire | 2 hours ago
> In general we trust people that we bring onto our team not to betray us and to respect general rules and policies and practices that benefit everyone.

And yet we give people the least privileges necessary to do their jobs for a reason, and it is in fact partially so that if they turn malicious, their potential damage is limited. We also log the actions employees take, and so on.

So yes, in the general sense we do trust that employees are not outright and automatically malicious, but we do put *very broad* constraints on them to limit the risk they present. Just as we 'sandbox' employees via e.g. RBAC restrictions, we sandbox AI.
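The RBAC analogy carries over directly to agent tooling: route every tool call through an explicit, default-deny permission check. A minimal sketch (the role names, tool names, and `is_allowed` helper are hypothetical, not from any real framework):

```python
# Illustrative default-deny, least-privilege gate for an agent's tool calls.
# Roles and tools are hypothetical examples, not a real system's schema.
ALLOWED = {
    "reviewer": {"read_file", "comment"},
    "developer": {"read_file", "comment", "write_file"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the role explicitly lists it (default deny)."""
    return action in ALLOWED.get(role, set())

print(is_allowed("reviewer", "write_file"))   # False: not explicitly granted
print(is_allowed("developer", "write_file"))  # True: explicitly granted
```

The key design choice is the default deny: an unknown role or unlisted action gets nothing, which is exactly the least-privilege posture described above.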
bastawhiz | 2 hours ago
Non-sentient technology has no concept of good or bad. We have no idea how to give it one, and even if we did, we'd have no idea how to teach it to "choose good".

> In general we trust people that we bring onto our team not to betray us and to respect general rules and policies and practices that benefit everyone. An AI teammate should be no different.

That misses the point completely. How many of your coworkers fail phishing tests? It's not about malice; it's about being deceived.