nico 9 hours ago
> Don't

Do you mean "Don't give it more autonomy", or "Don't use it to access servers/dbs"? I definitely want to be cautious, but I don't think I can go back to doing everything manually either.
bigstrat2003 7 hours ago
You have to choose between laziness and systems that the LLM can't screw up. You can't have both.
hephaes7us 7 hours ago
You can have it write code that you review (with whatever level of caution you wish) and then run that code yourself on real data/infrastructure. You still get a lot of leverage that way, and it's far safer than letting the AI use your keys and act with full autonomy on anything of consequence.
dsr_ 8 hours ago
Why aren't you using the tools we already have: ansible, salt, chef, puppet, bcfg2, cfengine? Every one of them was designed to do systems administration at scale.
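For anyone unfamiliar with that class of tooling: these tools let you declare the desired end state instead of handing out shell access. A minimal sketch of what an Ansible playbook looks like (host group and package are hypothetical examples, not from the thread):

```yaml
# site.yml -- declare desired state; Ansible makes hosts converge to it.
- hosts: webservers        # hypothetical inventory group
  become: true             # escalate privileges for package/service changes
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present     # idempotent: no-op if already installed

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because runs are idempotent and reviewable in version control, this is a very different risk profile from giving an agent raw SSH and credentials.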
JoshTriplett 9 hours ago
I mean, both, but in this case I'm saying "don't use it to access any kind of production resource", with a side order of "don't rely on simple sandboxing (e.g. command patterns) to prevent things like database deletions".
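To illustrate why pattern-based sandboxing is a weak defense: a deny-list of "dangerous" command patterns is trivially bypassed by rephrasing the same destructive operation. A minimal sketch (the deny-list and queries are hypothetical, invented for illustration):

```python
import re

# Naive "sandbox": block SQL statements matching known-dangerous patterns.
DENY_PATTERNS = [
    re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE),
    re.compile(r"\bDELETE\s+FROM\b", re.IGNORECASE),
]

def allowed(command: str) -> bool:
    """Return True if no deny-list pattern matches the command."""
    return not any(p.search(command) for p in DENY_PATTERNS)

# Blocked, as intended:
allowed("DROP TABLE users")       # False

# Trivially bypassed:
allowed("DROP/**/TABLE users")    # True: a SQL comment splits the keywords
allowed("TRUNCATE TABLE users")   # True: equally destructive, not on the list
```

The only robust version of this is capability-level isolation (read-only credentials, no production access at all), not string matching on what the model emits.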