moralestapia · 10 hours ago
If it were as easy as "knowing how to," someone would've already done it, or at least attempted to.* Plenty of people know how — tens of thousands of researchers; perhaps you know one yourself. Did you know that your local veterinary shop has enough drugs to kill hundreds of people? Why doesn't that happen?

* It's not that easy.
* There's a ton of regulation that is hard to circumvent, on purpose.
* There's a gigantic deterrent called "spend the rest of your life behind bars" that people tend to avoid.

An LLM, even the most advanced one, does not make any material change to any of these. You cannot bullshit your way into "uhh, I need Ebola samples for ... reasons."

Unironically, your Sunday movie portraying a supervillain jeopardizing a city with his "home lab" full of flasks of colored liquids and biohazard signs pushes way more people into becoming interested in this than access to an LLM does.

*: Okay, like 5 people, and way before LLMs were a thing. This has been a thing for decades; we're fine.
fwipsy · 4 hours ago · parent
CRISPR has not been a thing for decades. Biotechnology is advancing, and AI is lowering the bar to use it. In 2018 a PhD student was able to synthesize an infectious horsepox virus: https://journals.plos.org/plosone/article?id=10.1371/journal...

So far the overlap between people with bioengineering capabilities and murderous tendencies has been very low. As the technology becomes available to more people, that overlap may increase. Even if it never comes within reach of one person, what about North Korea, or Iran?

AI can be jailbroken. The LLM safeguards your argument relies on were put in place by the very people you're criticizing for being too safety-conscious. Security through obscurity is no guarantee.