tptacek | 3 days ago
You can simply disagree with me and we can hash it out. The "early career" thing is something Weakly herself has called out. I disagree with you that incident responders learn best by e.g. groveling through OpenSearch clusters themselves. In fact, I think the opposite is true: LLM agents do interesting things that humans don't think to do, and they can put more hypotheses on the table for incident responders to consider, faster, than the ordinary process of rabbitholing serially down individual hypotheses, 20-30 minutes at a time, never seeing the forest for the trees.

I think the same thing is probably true of things like "dumping complicated iproute2 routing table configurations" or "inspecting current DNS state". I know it to be the case for LVM2 debugging†! Note that these are all active investigation steps, that involve the LLM agent actually doing stuff, but none of it is plausibly destructive; a sketch of the kind of commands I mean follows below.

† Albeit tediously, with me shuttling things to and from an LLM rather than an agent doing things; this sucks, but we haven't solved the security issues yet.
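For concreteness, a rough sketch of the kind of read-only diagnostics I have in mind; the grouping and the specific flags are illustrative, not prescriptive:

    # Illustrative read-only investigation commands an agent might be
    # limited to. All are standard Linux tools; none of them mutate state.
    READ_ONLY_DIAGNOSTICS = {
        "routing": ["ip route show table all", "ip -6 route show"],
        "dns":     ["resolvectl status", "dig +trace example.com"],
        "lvm":     ["lvs -a -o +devices", "vgs -v", "pvs"],
        "logs":    ["journalctl --no-pager --since '-1h'"],
    }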
JoshTriplett | 3 days ago
The only mention I see of early-career coming up in the article is "matches how I would teach an early career engineer the process of managing an incident". That isn't a claim that only early-career engineers learn this way or benefit from working in this style. Your comment implied that the primary people who might want to work in the way proposed in this article are those early in their career. I would, indeed, disagree with that.

Consider, by way of example, the classic problem of teaching someone to find information. If someone asks "how do I X" and you answer "by doing Y", they have learned one thing (and will hopefully retain it). If someone asks "how do I X" and you answer "here's the search I did to find the answer, Y", they have now learned two things, and one of them reinforces a critical skill they should be using throughout their career.

I am not suggesting that incident response should be done entirely by hand, or that there's zero place for AI. AI is somewhat good at, for instance, looking at a huge amount of information at once and pointing toward things that might warrant a closer look. I'm nonetheless agreeing with the point that the human should be in the loop to a large degree. That also partly addresses the fundamental security problems of letting AI run commands in production, though in practice I think it likely that people will run commands presented to them without careful checking.

> none of it is plausibly destructive

In theory, you could have a safelist of ways to gather information non-destructively. In practice, it would not surprise me at all if people don't. I think it's very likely that many people will deploy AI tools in production without solving any of the security issues, and incidents will result. (A sketch of what enforcing such a safelist might look like is below.)

I am all for the concept of having a giant dashboard that collects and presents any non-destructive information rapidly, along with the commands that were used to obtain it. That tool is useful for a human, too.
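A minimal sketch of what enforcing that kind of safelist might look like, assuming a hypothetical run_tool hook the agent framework calls before executing anything; the function name and both token lists are illustrative, not a real API:

    import shlex
    import subprocess

    # Hypothetical policy: the binary must be on a read-only allowlist,
    # and known-mutating subcommands are rejected outright. Both lists
    # are illustrative and almost certainly incomplete.
    SAFE_BINARIES = {"ip", "dig", "resolvectl", "lvs", "vgs", "pvs", "journalctl"}
    FORBIDDEN_TOKENS = {"add", "del", "delete", "replace", "flush"}

    def run_tool(command: str) -> str:
        """Run an agent-proposed command only if it passes the safelist."""
        tokens = shlex.split(command)
        if not tokens or tokens[0] not in SAFE_BINARIES:
            raise PermissionError(f"binary not safelisted: {command!r}")
        if any(tok in FORBIDDEN_TOKENS for tok in tokens[1:]):
            raise PermissionError(f"mutating token in command: {command!r}")
        result = subprocess.run(tokens, capture_output=True, text=True, timeout=30)
        return result.stdout

Even this sketch shows why I'd expect deployed versions to have holes: tools like ip both read and mutate, so the binary allowlist alone isn't enough, and a deny-list of mutating tokens is easy to get wrong (journalctl --vacuum-size=1G, which deletes archived logs, would slip straight through this filter). A robust version probably needs a closed set of fully parameterized commands rather than a filter.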