bonsai_spool 3 hours ago
> Just reading this, the inevitable scaremongering about biological weapons comes up.

It's very easy to learn more about this if it's seriously a question you have. I don't quite follow why you think you are so much more thoughtful than Anthropic/OpenAI/Google: you accept that LLMs can't autonomously create very bad things in your own field, yet in biology, an area that is not your domain of expertise, you disagree with them and insist that LLMs cannot autonomously create damaging things there either. I will be charitable and reframe your question for you: is a sequence of tokens output by an LLM dangerous if we call the tokens characters? Clearly not; we still have to figure out what interpreter is being used, download runtimes, and so on. Is a sequence of tokens output by an LLM dangerous if we call the tokens DNA bases? What if we call them RNA bases? Amino acids? What if we can send our token output to a machine that automatically synthesizes the relevant molecules?
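To make the reframing concrete, here's a toy sketch in Python. The codon entries below are real standard genetic code, but the table is deliberately partial and the example sequence is made up; the point is only that the same string of tokens is inert text under one interpreter and a protein specification under another, so the interesting question lives entirely in the downstream "runtime":

    # The same "sequence of tokens" is harmless or meaningful depending purely
    # on the downstream interpreter. Partial standard genetic code
    # (DNA codon -> one-letter amino acid); entries correct but incomplete.
    CODON_TABLE = {
        "ATG": "M",  # methionine, also the start codon
        "TGG": "W",  # tryptophan
        "AAA": "K",  # lysine
        "GCT": "A",  # alanine
        "TAA": "*",  # stop
    }

    def translate(dna: str) -> str:
        """Mechanically read DNA three bases at a time into amino acids."""
        protein = []
        for i in range(0, len(dna) - 2, 3):
            aa = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" = not in this partial table
            if aa == "*":  # stop codon ends translation
                break
            protein.append(aa)
        return "".join(protein)

    llm_output = "ATGTGGAAAGCTTAA"  # made-up string: just characters until interpreted

    print(llm_output)             # read as text: an inert string
    print(translate(llm_output))  # read as DNA by a ribosome-like interpreter: "MWKA"

Nothing in the translation step is hard; what changes the risk picture is whether the consumer of the tokens is a print statement or a synthesis machine.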
torginus 2 hours ago
> It's very easy to learn more about this if it's seriously a question you have.

No, it's not. It took years of polishing by software engineers, people who understand that exact profession, to get models where they are now. Despite that, most engineers were of the opinion that these models were kinda mid at coding until recently, even though the same models far outperform humans at things like competitive programming. Yet we've seen claims of a DANGEROUS SUPERINTELLIGENCE going back to GPT-4.

I would apply the same framework to biology. This time around, the expert effort, the millions of GPU hours, and the giant open-source corpus that coding benefited from clearly haven't been there for biology. My guess is that this model is maybe at an o1-ish level when it comes to biology? If biology is analogous to CS, it has a LONG way to go before the median researcher finds it particularly useful, let alone dangerous.