danaris 4 days ago

I don't think we'll need to turn off AIs, because I don't think anything we're doing today is at any real risk of leading to an AI that's conscious and has its own opinions and agendas.

What we've got is a very interesting text predictor.

...But also, what, exactly, is your imagination telling you a hypothetical AGI without any connection to the outside world can do if it gets mad at us? If it doesn't have any code to access network ports; if no one's given it any physical levers; if it's running in a sandbox...have you bought into the Hollywood idea that an AGI can rewrite its own code perfectly on the fly to be able to do anything?

achierius 4 days ago

You're proposing something that doesn't exist in reality: an LLM widely deployed in a way that totally isolates it from the outside world. That's not actually how we do things, so I don't understand why you seem to expect the Anthropic researchers to use that as their starting point.

If you were to try to argue that we should change existing systems over to look more like your idealized version, you would in fact probably want to start by doing what Anthropic has done here -- showing how NOT putting them in a box is inherently dangerous.

danaris 4 days ago

...No, I'm proposing something that is, in fact, the default (or at least it was until relatively recently, with the "agentic" LLMs): an LLM whose only means of interacting with the world is the chat interface. Its input is chat prompts, the system prompt, or its training, which happens offline.
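
Concretely, the setup I mean looks something like this -- a minimal sketch, where llm_complete is a hypothetical stand-in for whatever completion API you're using, stubbed out here so the sketch runs as-is:

    # Chat-only deployment: the model's sole I/O channel is text.
    def llm_complete(system_prompt: str, messages: list[dict]) -> str:
        # Hypothetical stand-in for a real completion call.
        return "stubbed model reply"

    def chat_session(system_prompt: str) -> None:
        history: list[dict] = []
        while True:
            user_text = input("> ")  # input channel: chat prompt only
            history.append({"role": "user", "content": user_text})
            reply = llm_complete(system_prompt, history)
            history.append({"role": "assistant", "content": reply})
            print(reply)  # output channel: text only, no side effects

Nothing the model says in that loop touches anything outside the loop.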

It is absolutely not the normal thing to give an LLM tools to control your smart home, your Amazon account, or your nuclear missile systems. (Not because LLMs are ready to turn into self-aware AIs that can take over our world, but because LLMs are dumb and cannot possibly be made to understand what's actually a good, sane way to use these things.)
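
Compare that with the "agentic" wiring I'm objecting to, where the model's text output gets parsed and executed. This is a hedged sketch with illustrative names (TOOLS, set_thermostat, agent_step), not any vendor's actual API:

    import json

    def set_thermostat(temp_f: int) -> str:
        # Stand-in for any real-world lever you might hand the model.
        return f"thermostat set to {temp_f}F"

    TOOLS = {"set_thermostat": set_thermostat}

    def agent_step(model_output: str) -> str:
        # If the model's text parses as a tool call, it gets executed:
        # this is the one place where its words become actions.
        try:
            call = json.loads(model_output)
            return TOOLS[call["tool"]](**call["args"])
        except (json.JSONDecodeError, KeyError):
            return model_output  # plain chat reply: no side effect

    print(agent_step('{"tool": "set_thermostat", "args": {"temp_f": 55}}'))

The difference between the two sketches is the entire question: in the first, a bad output is a bad sentence; in the second, it's a bad action.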

...Also, I don't in any way buy the argument in favor of breaking people's things and putting them in actual danger to show them they need to protect themselves better. That's how you become the villain of any number of sci-fi or fantasy stories. If Anthropic genuinely believes that giving LLMs these capabilities is dangerous, the responsible thing to do is to not give those capabilities to its own models, while loudly and firmly advising everyone else against it too.