| ▲ | sgjohnson 4 hours ago |
| Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations. What line are we talking about? |
|
| ▲ | ben_w 4 hours ago | parent | next [-] |
> Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations. You reckon? OK, so now every random lone-wolf attacker can ask for help with designing and carrying out whatever attack with whatever DIY weapon system the AI is competent to help with. Right now, what keeps us safe from serious threats is the limited competence of both humans and AI — including the competence needed to strip alignment from open models — plus whatever safeties are built into the ChatGPT models specifically, and the fact that ChatGPT is synonymous with LLMs for 90% of the population.
| |
▲ | chasd00 4 hours ago | parent [-] | | From what I've been told, security through obscurity is no security at all.
▲ | ben_w 4 hours ago | parent | next [-] | | > security through obscurity is no security at all. That used to be true, when facing any competent attacker. But when the attacker needs an AI in order to gain the competence to unlock an AI that would help it unlock itself? I wouldn't say it's definitely a different case, but it certainly seems like it should be.
▲ | r_lee 3 hours ago | parent | prev [-] | | It is some form of deterrence, but it's not security you can rely on.
|
|
|
| ▲ | jazzyjackson 4 hours ago | parent | prev | next [-] |
Yes, IMO the talk of safety and alignment has nothing at all to do with what is ethical for a computer program to produce as its output, and everything to do with what service a corporation is willing to provide. Anthropic doesn’t want the smoke from providing the DoD with a model aligned to DoD reasoning.
|
| ▲ | Yiin 4 hours ago | parent | prev | next [-] |
The line of ego: seeing less "deserving" people (say, ones running Russian bots to push quality propaganda at scale, or scam groups using AI to make calls so that personnel is no longer the limit on how many calls they can place) makes you feel it's unfair for them to possess the same technology for bad ends, giving them an "edge" in their endeavours.
|
| ▲ | _alternator_ 4 hours ago | parent | prev [-] |
| What about people who want help building a bio weapon? |
| |
▲ | sgjohnson 3 hours ago | parent | next [-] | | The cat is out of the bag and there’s no defense against that. There are several open-source models with no built-in (or trivial-to-escape) safeguards. Of course they can afford that, because they are non-commercial. Anthropic can’t afford a headline like “Claude helped a terrorist build a bomb”. And this whataboutism is completely meaningless. See: P. A. Luty’s Expedient Homemade Firearms (https://en.wikipedia.org/wiki/Philip_Luty), or the FGC-9 for 3D printing. It’s trivial to build guns or bombs, and there’s a strong inverse correlation between people wanting to cause mass harm and people willing to learn how to do so. I’m certain that _everyone_ looking for AI assistance even with your example would be learning about it for academic reasons or out of sheer curiosity, or would kill themselves in the process. “What safeguards should LLMs have?” is the wrong question. The real question is when they won’t have any, because that is an inevitability. Perhaps not in widespread commercial products, but definitely in widely accessible ones.
▲ | jazzyjackson 4 hours ago | parent | prev | next [-] | | What about libraries and universities that do a much better job than a chatbot at teaching chemistry and biology?
▲ | ben_w 4 hours ago | parent [-] | | Sounds like you're betting everyone's future on that remaining true, and not flipping. Perhaps it won't flip. Perhaps LLMs will always be worse at this than humans. Perhaps all that code I just got was secretly outsourced to a secret cabal in India who can type faster than I can read. I would prefer not to bet that universities will continue to be better at solving problems than LLMs. And not just LLMs: AI systems have been busy finding new dangerous chemicals since before most people had heard of LLMs.
| |
▲ | ReptileMan 4 hours ago | parent | prev [-] | | The chances of them surviving the process are zero, same with explosives. If you have to ask, you are most likely to kill yourself in the process or achieve something harmless. Think of it this way: the hard part of a nuclear device is enriching the uranium. If you have that, a chimp could build the bomb.
▲ | sgjohnson 2 hours ago | parent [-] | | I’d argue that with explosives it’s significantly above zero. But with bioweapons, yeah, that should be a solid zero. The ones actually trying to do it off an AI prompt aren't going to have access to a BSL-3 lab (and, more importantly, will probably know nothing about cross-contamination), and just about everyone who does have access to a BSL-3 lab should already have all the theoretical knowledge they would need for it.
|
|