ben_w 2 hours ago
> We are far far away from a sort of world where turning AI off is a problem. There isn't going to be a HAL or Terminator style situation when the world is still "I, Pencil".

You have to stop the thing before the damage is done. There are many potential chains of events in which an AI causes enormous damage, and even some in which it destroys us, before the power to its own systems fails.

At this point, with Grok in the Pentagon, just ask what the dumbest military equivalent of vibe-coding is, and imagine the US following that plan. Like, I dunno, invading Greenland, or giving ICE direct control over tactical nukes, or something. And that's just government use.

Right now, I'm fairly confident LLMs aren't competent enough to help with anything world-ending unless they get used for war planning by major nuclear powers (oh hey, look at the topic of discussion). But it's certainly plausible they'll get good enough at tool use to run someone else's protein-folding software etc. to design custom pathogens, and I really hope all the DNA printing companies have good multi-layer defences by that point, all the way from KYC (or similar) to analysing what they've been asked to make and content-filtering it.