bonsai_spool 3 hours ago

> Just reading this, the inevitable scaremongering about biological weapons comes up.

It's very easy to learn more about this if it's seriously a question you have.

I don't quite follow why you think you are so much more thoughtful than Anthropic/OpenAI/Google. You accept their view that LLMs can't yet autonomously create very bad things in software, yet in biology, an area outside your expertise, you insist that LLMs cannot autonomously create anything damaging.

I will be charitable and reframe your question for you: is it dangerous for an LLM to output a sequence of tokens, let's call them characters? Clearly not on its own; we still have to figure out what interpreter is being used, download runtimes, and so on.

Is it dangerous for an LLM to output a sequence of tokens, let's call them DNA bases? What if we call them RNA bases? Amino acids? What if we're able to send that token output to a machine that automatically synthesizes the corresponding molecules?

torginus 2 hours ago

>It's very easy to learn more about this if it's seriously a question you have.

No, it's not. It took years of polishing by software engineers, people who understand this exact profession, to get models where they are now.

Even so, until recently most engineers were of the opinion that these models were kinda mid at coding, despite those models far outperforming humans at things like competitive programming.

Yet despite that, we've seen claims of a DANGEROUS SUPERINTELLIGENCE going back to GPT-4.

I would apply this framework to biology - except this time, the expert effort, the millions of GPU hours, and the giant open-source corpus that went into coding clearly have not been involved.

My guess is that this model is maybe at an o1-ish level when it comes to biology. If biology is analogous to CS, it has a LONG way to go before the median researcher finds it particularly useful, let alone dangerous.

bonsai_spool 2 hours ago

>>It's very easy to learn more about this if it's seriously a question you have.

>No, it's not. It took years of polishing by software engineers, who understand this exact profession to get models where they are now

This reads as defensive. The thing that is easy to learn comes from searching 'why are biology ai LLMs dangerous chatgpt claude'. I have never googled this before, so I'll do it with the reader, live. I'm applying a date cutoff of 12/31/24, by the way.

Here, dear reader, are the first five links. I wish I were lying about this:

- https://sciencebusiness.net/news/ai/scientists-grapple-risk-...

- https://www.governance.ai/analysis/managing-risks-from-ai-en...

- https://gssr.georgetown.edu/the-forum/topics/biosec/the-doub...

- https://www.vox.com/future-perfect/23820331/chatgpt-bioterro...

- https://www.reddit.com/r/ClaudeAI/comments/1de8qkv/awareness...

I don't know about you, but that counts as easy to me.

-----

> I would apply this framework to biology - this time, expert effort, and millions of GPU hours and a giant corpus that is open source clearly has not been involved in biology.

I've been getting good programming and molecular biology results out of these models going back to GPT-3.5.

I don't know what to tell you—if you really wanted to understand the importance, you'd know already.