rglover 4 hours ago
The big problem here is how vigilant those in command will be about vetting the AI's responses. This feels like one of those systems that works great until someone vaporizes a hallucinated target that turns out to be civilians. Keeping a human in the middle should mitigate this, but it's still risky: humans make mistakes too, and they're inclined to just "believe what the computer says." So as much as I'd love to believe this ends with a white picket fence scene, my instincts are screaming "dig a bunker, homie."