danaris 4 days ago
I don't think we'll need to turn off AIs, because I don't think anything we're doing today is at any real risk of leading to an AI that's conscious and has its own opinions and agendas. What we've got is a very interesting text predictor. ...But also, what, exactly, is your imagination telling you that a hypothetical AGI without any connection to the outside world can do if it gets mad at us? If it doesn't have any code to access network ports; if no one's given it any physical levers; if it's running in a sandbox... have you bought into the Hollywood idea that an AGI can rewrite its own code perfectly on the fly to be able to do anything?
achierius 4 days ago | parent
You're proposing something that doesn't exist in reality: an LLM widely deployed in a way that totally isolates it from the outside world. That's not actually how we do things, so I don't understand why you expect the Anthropic researchers to use that as their starting point. If you wanted to argue that we should change existing systems to look more like your idealized version, you would in fact probably want to start by doing what Anthropic has done here: show how NOT putting them in a box is inherently dangerous.