m-hodges 10 hours ago
See: A field guide to sandboxes for AI¹ on the threat models.

> I want to be direct: containers are not a sufficient security boundary for hostile code. They can be hardened, and that matters. But they still share the host kernel. The failure modes I see most often are misconfiguration and kernel/runtime bugs, plus a third one that shows up in AI systems: policy leakage.
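
For concreteness, here is a minimal sketch of the "can be hardened" side, assuming Docker as the runtime. All flags are standard Docker options; the image name, limits, and timeout are illustrative assumptions, not from the comment. Each flag narrows what a hostile process can reach, but none of them changes the shared-kernel fact:

    # hardened_run.py - run untrusted code in a locked-down container.
    # Illustrative sketch: flags are standard Docker options; the image,
    # resource limits, and timeout are assumptions for the example.
    import subprocess

    def run_sandboxed(image: str, command: list[str], timeout: int = 60):
        args = [
            "docker", "run", "--rm",
            "--network=none",                    # no network at all
            "--cap-drop=ALL",                    # drop every Linux capability
            "--security-opt=no-new-privileges",  # block setuid escalation
            "--read-only",                       # immutable root filesystem
            "--tmpfs=/tmp:rw,noexec,size=64m",   # scratch space, no executables
            "--pids-limit=128",                  # cap fork bombs
            "--memory=256m", "--cpus=1",         # resource ceilings
            "--user=65534:65534",                # run as nobody, not root
            image, *command,
        ]
        return subprocess.run(args, capture_output=True, text=True,
                              timeout=timeout)

    if __name__ == "__main__":
        result = run_sandboxed("python:3.12-slim",
                               ["python", "-c", "print('hi')"])
        print(result.stdout)

Note what this does not cover: a kernel or runtime bug (the second failure mode above) lets code step past every one of these flags onto the host, which is exactly the comment's point about containers as a boundary.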