| ▲ | stackghost 4 hours ago | |
As in any complex system, failures only occur when all the holes in the metaphorical slices of Swiss cheese line up to create a path. Filling the hole in any of the layers traps the error and averts a failure. So, perhaps yes, it could have been solved that way. My personal beef in this particular instance is that we've seemingly decided to throw decades of advice in the form of "don't allow untrusted input to be executable" out the window. Like, say, having an LLM read github issues that other people can write. It's not like prompt injections and LLM jailbreaks are a new phenomenon. We've known about those problems about as long as we've known about LLMs themselves. | ||