nearbuy 3 hours ago:

Claude was used by the US military in the Venezuela raid where they captured Maduro. [1]

Without safety features, an LLM could also help plan a terrorist attack. A smart, competent terrorist can plan a successful attack without help from Claude, but most would-be terrorists aren't that smart or competent. Many are caught before hurting anyone, or do far less damage than they could have. An LLM can walk you through every step and answer your questions along the way. It could, say, explain the different bomb chemistries, recommend one for your use case, help you source materials, and walk you through building the bomb safely. It lowers the bar for who can do this.

[1] https://www.theguardian.com/technology/2026/feb/14/us-milita...
YetAnotherNick 2 hours ago:

Yeah, if the US military gets any substantial help from Claude (which I highly doubt, to be honest), I'm all for it. At worst, it will reduce military budgets and make armies more evenly matched. At best, it will prevent war by strengthening every country's defences.

For the bomb example, the barrier to entry is just sourcing some chemicals. Wikipedia has quite detailed descriptions of how to manufacture all the popular bombs you can think of.
ben_w 4 hours ago:

The same law prevents you and me and a hundred thousand lone-wolf wannabes from building and using a kill-bot. The question is: at what point does some AI become competent enough to engineer one? And that's just one example; it illustrates the category rather than being the sole specific risk.

If the model makers don't know that in advance, the argument given for delaying GPT-2 applies: you can't take back publication, so better to hold a standard of excess caution.