roxolotl 5 hours ago

We don’t need AGI or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision making to a machine.

And of course a human can make a wrong call too. In this scenario, that’s exactly what is happening. And of course we should bring all of our tools to bear when evaluating nuclear threats.

But that doesn’t make it less concerning that we’ve now got machines capable of linguistic persuasion in that toolset.

asah 4 hours ago | parent [-]

"hand over" is a misnomer. What actually happens is that there's an interaction with a machine, and people either trust it too much or forget that it's a machine — i.e., the output is handed from one person to another and the "AI warning" label is accidentally or intentionally ripped off.