fwipsy 12 hours ago

I'm fairly certain Amodei believes the "too dangerous to release" hype himself. Even if it's just an incremental improvement, better that than getting frog-boiled by repeated 20% improvements until someone builds bioweapons in their backyard.

drakythe 12 hours ago | parent | next [-]

He's made so many statements that fall under the "boy who cried wolf" category that even if he _does_ believe them, he needs to be managed better. I'll never forget Anthropic's huge "Oh my God, the AI blackmailed a researcher to save itself!" announcement, when the prompt effectively told the AI to do exactly that and handed it forged emails with easy blackmail targets, as if this isn't a common trope in mystery and suspense books/television/fanfiction, all of which Claude (and others) have been trained on.

ctoth 11 hours ago | parent | next [-]

It's a common trope, all through the training data, and all the modern AIs have read it, and would probably act similarly? Is that what we should take away from your comment? So we have nothing to worry about. Makes sense. Really, it's just a common trope.

fwipsy 10 hours ago | parent [-]

Oh of course wolves have sharp teeth, they're predators. Anyone who knows this can never be bitten.

fwipsy 10 hours ago | parent | prev [-]

Imagine you're in a car and the car is driving towards a cliff. You shout at the driver "oh my god we're about to go over a cliff!" And he says "you said that two seconds ago, but we're still alive, you're just like the boy who cried wolf. Do you know exactly when we're going to go over a cliff? No? Maybe you're imagining the cliff."

I think it's very improbable that AI is as dangerous as Yud et al fear it is. But it's too soon to say and there seems to be significant long-tail risk. Mocking or criticizing people for being concerned about that risk seems counterproductive.

Seems like the life cycle of huge tech companies like Meta, Google, Microsoft, and Amazon is "do whatever's necessary to take over the world, then enshittify." I don't take it for granted that Amodei and Anthropic seem, so far, not quite maximally power-hungry.

Re: second half of your comment. Understanding a threat doesn't neutralize it. Anthropic didn't make that big a deal of it either; it was news articles that blew it out of proportion.

moralestapia 11 hours ago | parent | prev [-]

* sigh *

Three things:

* Delaying the release accomplishes nothing.

* The barrier to someone building/not-building a bioweapon in their backyard is not access to an LLM.

* Remember when GPT 3.5 was going to destroy the world? And how it was conscious? And how it was "trying to escape"? Lmao.

malfist 11 hours ago | parent | next [-]

I think GPT-3.5 might have destroyed the world.

usaar333 11 hours ago | parent | prev | next [-]

How does delaying the release not accomplish anything? It puts everyone on notice to fix all the security vulnerabilities now.

spooneybarger 11 hours ago | parent [-]

Because the only thing keeping those vulnerabilities in existence was laziness.

anon84873628 10 hours ago | parent [-]

"laziness" is an interesting reframing of "rational cost-benefit analysis and the limits of the human mind".

fwipsy 10 hours ago | parent | prev [-]

You're right, it's silly for me to worry. We've never had a technology that initially appeared benign but turned into a big problem. In fact, no tech company has ever released technologies that cause problems for the rest of society AT ALL. /s

What are the other barriers? Last I checked access to CRISPR is not especially tightly regulated. Even if it is, defense in depth is a thing.

moralestapia 10 hours ago | parent [-]

If it were as easy as "knowing how to," someone would've already done it, or at least attempted to.*

Plenty of people know how: tens of thousands of researchers do. Perhaps you know someone who does.

Did you know that your local veterinary shop has enough drugs to kill hundreds of people?

Why doesn't it happen?

* It's not that easy.

* There's a ton of regulation that is hard to circumvent, on purpose.

* There's a gigantic deterrent called "spend the rest of your life behind bars" that people tend to avoid.

An LLM, even the most advanced one, does not make any material change in any of these. You cannot bullshit your way into "uhh, I need Ebola samples for ... reasons".

Unironically, your Sunday movie portraying a supervillain jeopardizing a city with his "home lab" full of flasks of colored liquids and biohazard signs pushes way more people into becoming interested in this than having access to an LLM does.

*: Okay, like 5 people, and way before LLMs were a thing. This has been a thing for decades; we're fine.

fwipsy 4 hours ago | parent [-]

CRISPR has not been a thing for decades. Biotechnology is advancing and AI is lowering the bar to use it. In 2018 a PhD student was able to synthesize an infectious horsepox virus: https://journals.plos.org/plosone/article?id=10.1371/journal...

So far the overlap between people with bioengineering capabilities and people with murderous tendencies has been very low. As the technology becomes available to more people, that overlap may increase. And even if it never comes within reach of a lone individual, what about North Korea or Iran?

AI can be jailbroken. The LLM safeguards your argument relies on were put in place by the people you're criticizing for being too safety-conscious. Security through obscurity is no guarantee.