| ▲ | JoshTriplett 9 hours ago |
| Don't. Among the many other reasons why you shouldn't do this, there are regularly reported cases of AIs working around these types of restrictions using the tools they have to substitute for the tools they don't. Don't be the next headline about AI deleting your database. |
|
| ▲ | codingdave 8 hours ago | parent | next [-] |
| You need to secure the account an LLM-based app runs under, just like you would any user, AI or not. When you hire real people, do you grant them full privileges on all systems and just ask them not to touch things they shouldn't? No, you secure their accounts to the specific privileges they need, and no more. Do the same with AI. |
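A minimal sketch of what that least-privilege setup might look like, assuming a Postgres database reached via psycopg2; the role, database, and schema names are illustrative, not taken from any particular deployment:

```python
# Hedged sketch: give the LLM agent its own database role with only the
# privileges it actually needs (here, read-only SELECT). Names are assumptions.
import psycopg2

ADMIN_DSN = "dbname=appdb user=admin"  # assumed admin connection string

STATEMENTS = [
    "CREATE ROLE llm_agent LOGIN PASSWORD 'replace-with-a-strong-secret'",
    "GRANT CONNECT ON DATABASE appdb TO llm_agent",
    "GRANT USAGE ON SCHEMA public TO llm_agent",
    # Read-only: no INSERT, UPDATE, DELETE, TRUNCATE, or DDL such as DROP.
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO llm_agent",
]

with psycopg2.connect(ADMIN_DSN) as conn:
    with conn.cursor() as cur:
        for stmt in STATEMENTS:
            cur.execute(stmt)
```

The agent's own connection string then uses the restricted role, so even a confused or manipulated agent simply lacks the privilege to drop or rewrite tables.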
| |
| ▲ | icedchai 5 hours ago | parent [-] | | You'd be surprised. I've worked at multiple startups where employees were given prod access with zero oversight on day one: AWS, sudo access, database passwords, everything. The one startup that didn't do that never launched. Occasionally there were accidents: wrong branch deployed, bulk updates to DNS taking down most of the site, etc. | | |
| ▲ | codingdave 5 hours ago | parent [-] | | Sure, so draw a different line - not all devs have access to withdraw cash from the corporate accounts, or to open the email of the CEO and board, etc. There are always lines of privilege drawn. The point isn't to quibble over where they are drawn, it is to point out that you need to do the same for LLMs. Don't trust them to behave. Enforce limits on their privileges. |
|
|
|
| ▲ | ninju 8 hours ago | parent | prev | next [-] |
| https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-age... |
|
| ▲ | nico 9 hours ago | parent | prev [-] |
> Don't

Do you mean "Don't give it more autonomy", or "Don't use it to access servers/dbs"? I definitely want to be cautious, but I don't think I can go back to doing everything manually either |
| |
| ▲ | bigstrat2003 7 hours ago | parent | next [-] | | You have to choose between laziness and having systems that the LLM can't screw up. You can't have both. | |
| ▲ | hephaes7us 7 hours ago | parent | prev | next [-] | | You can have it write code that you review (with whatever level of caution you wish) and then run that code against real data/infrastructure yourself. It's slower, but you still get a lot of leverage, and it's far better than letting the AI use your keys and act with full autonomy on anything of consequence. | |
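One possible shape for that review step, as a hedged sketch: the agent's proposed script is shown to a human and only runs after explicit approval. The function name, file name, and example script are assumptions for illustration, not an established tool:

```python
# Minimal review gate: nothing touches real infrastructure until a human
# has read the proposed script and explicitly approved it.
import subprocess
from pathlib import Path

def run_with_review(proposed_script: str, path: str = "agent_proposal.py") -> None:
    Path(path).write_text(proposed_script)
    print(f"--- proposed script ({path}) ---\n{proposed_script}\n---")
    if input("Run this against real infrastructure? [y/N] ").strip().lower() == "y":
        subprocess.run(["python", path], check=True)
    else:
        print("Rejected; nothing was executed.")

# Example: review whatever the agent produced before it runs with your credentials.
run_with_review("print('hello from the agent')")
```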
| ▲ | dsr_ 8 hours ago | parent | prev | next [-] | | Why aren't you using the tools we already have: ansible, salt, chef, puppet, bcfg2, cfengine... every one of which was designed to do systems administration at scale? | | |
| ▲ | dpoloncsak 8 hours ago | parent [-] | | "Why would you use a new tool when other tools already exist?" Agents are here. Maybe a fad, maybe a mainstay. It doesn't hurt to play around with them and understand where you can (and can't) use them. |
| |
| ▲ | JoshTriplett 9 hours ago | parent | prev [-] | | I mean, both, but in this case I'm saying "don't use it to access any kind of production resource", with a side order of "don't rely on simple sandboxing (e.g. command patterns) to prevent things like database deletions". |
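To illustrate the second point, here is a hypothetical command-pattern filter (not taken from any real tool) and why it offers little real protection against an agent with shell access:

```python
# Hedged illustration: a naive deny-list over command strings. The patterns
# and commands below are made up to show the idea, not drawn from a real sandbox.
import re

BLOCKED = [r"\bDROP\s+TABLE\b", r"\brm\s+-rf\b"]  # naive deny-list

def allowed(command: str) -> bool:
    return not any(re.search(p, command, re.IGNORECASE) for p in BLOCKED)

# The filter blocks the obvious spelling...
print(allowed("psql -c 'DROP TABLE users'"))         # False
# ...but not equivalent actions expressed differently, e.g. running a script
# the agent just wrote, or going through a client library instead of the CLI.
print(allowed("psql -f cleanup.sql"))                # True
print(allowed("python manage.py shell < wipe.py"))   # True
```

Pattern matching on command text constrains spellings, not capabilities; only the credentials and privileges the agent actually holds do that.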
|