bonsai_spool 2 hours ago
>> It's very easy to learn more about this if it's seriously a question you have.

> No, it's not. It took years of polishing by software engineers, who understand this exact profession, to get models where they are now.

This reads as defensive. The thing that is easy to learn can be found by searching "why are biology ai LLMs dangerous chatgpt claude". I have never googled this before, so I'll do it with the reader, live, applying a date cutoff of 12/31/24 by the way. Here, dear reader, are the first five links. I wish I were lying about this:

- https://sciencebusiness.net/news/ai/scientists-grapple-risk-...

- https://www.governance.ai/analysis/managing-risks-from-ai-en...

- https://gssr.georgetown.edu/the-forum/topics/biosec/the-doub...

- https://www.vox.com/future-perfect/23820331/chatgpt-bioterro...

- https://www.reddit.com/r/ClaudeAI/comments/1de8qkv/awareness...

I don't know about you, but that counts as easy to me.

-----

> I would apply this framework to biology - this time, expert effort, and millions of GPU hours and a giant corpus that is open source clearly has not been involved in biology.

I've been getting good programming and molecular biology results out of these models going back to GPT-3.5. I don't know what to tell you: if you really wanted to understand the importance, you'd know already.