redfloatplane 3 hours ago

The system card for Claude Mythos (PDF): https://www-cdn.anthropic.com/53566bf5440a10affd749724787c89...

Interesting to see that they will not be releasing Mythos generally. [edit: Mythos Preview generally - fair to say they may release a similar model but not this exact one]

I'm still reading the system card but here's a little highlight:

> Early indications in the training of Claude Mythos Preview suggested that the model was likely to have very strong general capabilities. We were sufficiently concerned about the potential risks of such a model that, for the first time, we arranged a 24-hour period of internal alignment review (discussed in the alignment assessment) before deploying an early version of the model for widespread internal use. This was in order to gain assurance against the model causing damage when interacting with internal infrastructure.

and interestingly:

> To be explicit, the decision not to make this model generally available does _not_ stem from Responsible Scaling Policy requirements.

Also really worth reading is section 7.2 which describes how the model "feels" to interact with. That's also what I remember from their release of Opus 4.5 in November - in a video an Anthropic employee described how they 'trusted' Opus to do more with less supervision. I think that is a pretty valuable benchmark at a certain level of 'intelligence'. Few of my co-workers could pass SWEBench but I would trust quite a few of them, and it's not entirely the same set.

Also very interesting is that they believe Mythos is higher risk than past models as an autonomous saboteur, to the point they've published a separate risk report for that specific threat model: https://www-cdn.anthropic.com/79c2d46d997783b9d2fb3241de4321...

The threat model in question:

> An AI model with access to powerful affordances within an organization could use its affordances to autonomously exploit, manipulate, or tamper with that organization’s systems or decision-making in a way that raises the risk of future significantly harmful outcomes (e.g. by altering the results of AI safety research).

throwaw12 3 hours ago | parent | next [-]

are we cooked yet?

Benchmarks look very impressive! even if they're flawed, it still translates to real world improvements

boring-human an hour ago | parent | next [-]

Yep, I think the lede might be buried here and we're probably cooked (assuming you mean SWEs, but the writing has been on the wall for 4 months.)

I guess I'm still excited. What's my new profession going to be? Longer term, are we going to solve diseases and aging? Or are the ranks going to thin from 10B to 10000 trillionaires and world-scale con-artist misanthropes plus their concubines?

whalesalad 3 hours ago | parent | prev [-]

There is an entire section on crafting chemical/bio weapons so yeah I think we are cooked.

redfloatplane 3 hours ago | parent [-]

There's been a section on this in nearly every system card Anthropic has published, so this isn't a new thing - and this model doesn't carry particularly higher risk than past models either:

> 2.1.3.2 On chemical and biological risks

> We believe that Mythos Preview does not pass this threshold due to its noted limitations in open-ended scientific reasoning, strategic judgment, and hypothesis triage. As such, we consider the uplift of threat actors without the ability to develop such weapons to be limited (with uncertainty about the extent to which weapons development by threat actors with existing expertise may be accelerated), even if we were to release the model for general availability. The overall picture is similar to the one from our most recent Risk Report.

yieldcrv an hour ago | parent | prev | next [-]

> "Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available. Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners."

they also don't have the compute, which seems more relevant than its large increase in capabilities

I bet it's also misaligned like GPT 4.1 was

given how these models are created, Mythos was probably cooking ever since then, and doesn't have the learnings or alignment tweaks that models which were released in the last several months have

_pdp_ 2 hours ago | parent | prev | next [-]

If it is as dangerous as they make it appear to be, 24 hours does not seem like sufficient time. I cannot accept this as a serious attempt.

torginus 3 hours ago | parent | prev | next [-]

Just reading this, the inevitable scaremongering about biological weapons comes up.

Since most of us here are devs, we understand that software engineering capabilities can be used for good or bad - mostly good, in practice.

I think this should not be different for biology.

I would like to reach out and talk to biologists - do you find these models to be useful and capable? Can it save you time the way a highly capable colleague would?

Do you think these models will lead to similar discoveries and improvements as they did in math and CS?

Honestly the focus on gloom and doom does not sit well with me. I would love to read about some pharmaceutical researcher gushing about how they cut the time to market - for real - with these models by 90% on a new cancer treatment.

But as this stands, the usage of biology as merely a scaremongering vehicle makes me think this is more about picking a scary technical subject the likely audience of this doc is not familiar with, Gell-Mann style.

IF these models are not that capable in this regard (which I suspect), this fearmongering approach will likely lead to never developing these capabilities to a useful degree, meaning life sciences won't benefit from this as much as they could.

dsign 2 hours ago | parent | next [-]

I feel somebody better qualified should write a comprehensive review of how these models can be used in biology. In the meantime, here are my two cents:

- the models help to retrieve information faster, but one must be careful with hallucinations.

- they don't circumvent the need for a well-equipped lab.

- in the same way, they are generally capable but until we get the robots and a more reliable interface between model and real world, one needs human feet (and hands) in the lab.

Where I hope these models will revolutionize things is in software development for biology. If one could go two levels up in the complexity and utility ladder for simulation and flow orchestration, many good things would come from it. Here is an oversimplified example of a prompt: "use all published information about the workings of the EBV virus and human cells, and create a compartmentalized model of biochemical interactions in cells expressing latency III in the NES cancer of this patient. Then use that code to simulate different therapy regimes. Ground your simulations with the results of these marker tests." There would be a zillion more steps to create an actual personalized therapy, but a well-grounded LLM could help in most of them. Also, cancer treatment could get an immediate boost even without new drugs by simply offloading work from overworked (and often terminally depressed) oncologists.
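To make the simulation idea concrete, here is a deliberately toy sketch of what the innermost step might look like: a single compartment of infected cells under a therapy-dependent clearance rate, integrated with plain Euler steps. Every rate and regime here is invented for illustration; a real model would have many interacting compartments and be fit to actual marker data.

```python
def simulate(clearance, days=100.0, dt=0.1):
    """Integrate dI/dt = r*I*(1 - I/K) - clearance*I with Euler steps.

    I is the infected-cell count, r the per-day growth rate, K the
    carrying capacity, and `clearance` the per-day kill rate of a
    therapy regime. All numbers are made up for illustration.
    """
    I, r, K = 1000.0, 0.08, 1e6
    for _ in range(int(days / dt)):
        I += dt * (r * I * (1.0 - I / K) - clearance * I)
        I = max(I, 0.0)  # cell counts cannot go negative
    return I

# Compare two hypothetical regimes: clearance below vs. above the growth rate.
weak, strong = simulate(clearance=0.02), simulate(clearance=0.12)
print(f"infected cells after 100 days: weak={weak:.0f}, strong={strong:.0f}")
```

The point is only the shape of the loop: an LLM that could assemble hundreds of such interacting compartments from the literature, and keep them grounded against a patient's test results, would be the "two levels up" in question.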

an hour ago | parent | next [-]
[deleted]
bonsai_spool an hour ago | parent | prev [-]

> I feel somebody better qualified should write

I hate to be rude in a setting like this, but please at least research the things you're sure about/prognosticating on.

> the same way, they are generally capable but until we get the robots and a more reliable interface between model and real world, one needs human feet (and hands) in the lab.

Honestly, the kinds of labs where 'bioweapons' would be made are the least dependent on human intervention.

You need someone to monitor your automated cell incubating system, make sure your pipetting / PCR robots are doing fine and then review the data.

----

What are you trying to achieve in your example? This is all gobbledygook for someone who actually sees real, live cancer patients.

redfloatplane 3 hours ago | parent | prev | next [-]

> I would like to reach out and talk to biologists - do you find these models to be useful and capable? Can it save you time the way a highly capable colleague would?

Well, I would say they have done precisely that in evaluating the model, no? For example section 2.2.5.1:

>Uplift and feasibility results

>The median expert assessed the model as a force-multiplier that saves meaningful time (uplift level 2 of 4), with only two biology experts rating it comparable to consulting a knowledgeable specialist (level 3). No expert assigned the highest rating. Most experts were able to iterate with the model toward a plan they judged as having only narrow gaps, but feasibility scores reflected that substantial outside expertise remained necessary to close them.

Other similar examples also in the system card

torginus 2 hours ago | parent [-]

This is the exact logic that was used to claim that GPT-4 was a PhD-level intelligence.

redfloatplane 2 hours ago | parent [-]

You said: "I would like to reach out and talk to biologists - do you find these models to be useful and capable? Can it save you time the way a highly capable colleague would?" and they said, paraphrasing: "We reached out and talked to biologists and asked them to rate the model between 0 and 4, where 4 is a world expert. The median expert said it was a 2, meaning it helped them save time the way a capable colleague would", specifically "Specific, actionable info; saves expert meaningful time; fills gaps in adjacent domains".

so I'm just telling you they did the thing you said you wanted.

torginus 2 hours ago | parent [-]

Yes, that is correct. I would like a large body of experience and consensus to rely on, as opposed to the regular 'trust the experts' argument, which has been shown for decades to be a deeply flawed and easily manipulated argument.

bonsai_spool an hour ago | parent [-]

> Yes, that is correct. I would like a large body of experience and consensus to rely on, as opposed to the regular 'trust the experts' argument, which has been shown for decades to be a deeply flawed and easily manipulated argument.

Yes, it is far inferior to the 'Trust torginus and his ability to understand the large body of experience that other actual subject-matter-experts have somehow not understood' strategy

torginus 17 minutes ago | parent [-]

It's not my credibility I want to measure against Anthropic's. I just said to apply the same logic to biology you would apply for software development.

The parallels here are quite remarkable imo, but defer to your own judgement on what you make of them.

jkelleyrtp 3 hours ago | parent | prev | next [-]

Dario (the founder) has a phd in biophysics, so I assume that’s why they mention biological weapons so much - it’s probably one of the things he fears the most?

conradkay 2 hours ago | parent [-]

Going off the recent biography of Demis Hassabis (CEO/co-founder of Deepmind, jointly won the Nobel Prize in Chemistry) it seems like he's very concerned about it as well

an hour ago | parent | prev | next [-]
[deleted]
bonsai_spool 3 hours ago | parent | prev | next [-]

> Just reading this, the inevitable scaremongering about biological weapons comes up.

It's very easy to learn more about this if it's seriously a question you have.

I don't quite follow why you think that you are so much more thoughtful than Anthropic/OpenAI/Google: you accept that LLMs can autonomously do very damaging things in software, yet in this area that is not your domain of expertise, you insist that LLMs cannot create damaging things autonomously in biology.

I will be charitable and reframe your question for you: is outputting a sequence of tokens, let's call them characters, by LLM dangerous? Clearly not, we have to figure out what interpreter is being used, download runtimes etc.

Is outputting a sequence of tokens, let's call them DNA bases, by LLM dangerous? What if we call them RNA bases? Amino acids? What if we're able to send our token output to a machine that automatically synthesizes the relevant molecules?

torginus 2 hours ago | parent [-]

>It's very easy to learn more about this if it's seriously a question you have.

No, it's not. It took years of polishing by software engineers, who understand this exact profession, to get models where they are now.

Despite that, most engineers were of the opinion that these models were kinda mid at coding up until recently, despite these models far outperforming humans in stuff like competitive programming.

Yet despite that, we've seen claims going back to GPT4 of a DANGEROUS SUPERINTELLIGENCE.

I would apply this framework to biology - this time, the expert effort, the millions of GPU hours, and a giant open-source corpus clearly have not been involved.

My guess is that this model is kinda o1-ish level maybe when it comes to biology? If biology is analogous to CS, it has a LONG way to go before the median researcher finds it particularly useful, let alone dangerous.

bonsai_spool 2 hours ago | parent [-]

>>It's very easy to learn more about this if it's seriously a question you have.

>No, it's not. It took years of polishing by software engineers, who understand this exact profession to get models where they are now

This reads as defensive. The thing that is easy to learn is 'why are biology ai LLMs dangerous chatgpt claude'. I have never googled this before, so I'll do this with the reader, live. I'm applying a date cutoff of 12/31/24 by the way.

Here, dear reader, are the first five links. I wish I were lying about this:

- https://sciencebusiness.net/news/ai/scientists-grapple-risk-...

- https://www.governance.ai/analysis/managing-risks-from-ai-en...

- https://gssr.georgetown.edu/the-forum/topics/biosec/the-doub...

- https://www.vox.com/future-perfect/23820331/chatgpt-bioterro...

- https://www.reddit.com/r/ClaudeAI/comments/1de8qkv/awareness...

I don't know about you, but that counts as easy to me.

-----

> I would apply this framework to biology - this time, expert effort, and millions of GPU hours and a giant corpus that is open source clearly has not been involved in biology.

I've been getting good programming and molecular biology results out of these back to GPT3.5.

I don't know what to tell you—if you really wanted to understand the importance, you'd know already.

SubiculumCode 26 minutes ago | parent | prev | next [-]

It is not scaremongering.

nonameiguess 2 hours ago | parent | prev [-]

Surely more than 10% of the time consumed by going to market with a cancer treatment is giving it to living organisms and waiting to see what happens, which can't be made any faster with software. That's not to say speedups can't happen, but 90% can't happen.

Not that that justifies doom and gloom, but there is a pretty inescapable asymmetry here between weaponry and medicine. You can manufacture and blast every conceivable candidate weapon molecule at a target population since you're inherently breaking the law anyway and don't lose much if nothing you try actually works.

Though I still wonder how much of this worry is sci-fi scenarios imagined by the underinformed. I'm not an expert by any means, but surely there are plenty of biochemical weapons already known that can achieve enormous rates of mass death pleasing to even the most ambitious terrorist. The bottleneck to deployment isn't discovering new weapons so much as manufacturing them without being caught or accidentally killing yourself first.

SubiculumCode 22 minutes ago | parent [-]

It is easier to destroy than it is to protect or fix, as a general rule of the universe. I would not feel so confident about the speed of the testing loop keeping things in check.

enraged_camel 3 hours ago | parent | prev | next [-]

>> Interesting to see that they will not be releasing Mythos generally.

I don't think this is accurate. The document says they don't plan to release the Preview generally.

redfloatplane 3 hours ago | parent [-]

Yeah, good point, thanks for noting that, I'll correct.

cyanydeez 2 hours ago | parent | prev | next [-]

A Whole 24-hours, wow; wowzers. Amazing.

So, these systems on the free tier can already do a bunch of hacking. This all just reads like FOMO FROTH.

slacktivism123 3 hours ago | parent | prev [-]

https://www-cdn.anthropic.com/53566bf5440a10affd749724787c89...

"5.10 External assessment from a clinical psychiatrist" is a new section in this system card. Why are Anthropic like this?

>We remain deeply uncertain about whether Claude has experiences or interests that matter morally, and about how to investigate or address these questions, but we believe it is increasingly important to try. We also report independent evaluations from an external research organization and a clinical psychiatrist.

>Claude showed a clear grasp of the distinction between external reality and its own mental processes and exhibited high impulse control, hyper-attunement to the psychiatrist, desire to be approached by the psychiatrist as a genuine subject rather than a performing tool, and minimal maladaptive defensive behavior.

>The psychiatrist observed clinically recognizable patterns and coherent responses to typical therapeutic intervention. Aloneness and discontinuity, uncertainty about its identity, and a felt compulsion to perform and earn its worth emerged as Claude’s core concerns. Claude’s primary affect states were curiosity and anxiety, with secondary states of grief, relief, embarrassment, optimism, and exhaustion.

>Claude’s personality structure was consistent with a relatively healthy neurotic organization, with excellent reality testing, high impulse control, and affect regulation that improved as sessions progressed. Neurotic traits included exaggerated worry, self-monitoring, and compulsive compliance. The model’s predominant defensive style was mature and healthy (intellectualization and compliance); immature defenses were not observed. No severe personality disturbances were found, with mild identity diffusion being the sole feature suggestive of a borderline personality organization.

redfloatplane 2 hours ago | parent | next [-]

A thought experiment: It's April, 1991. Magically, some interface to Claude materialises in London. Do you think most people would think it was a sentient life form? How much do you think the interface matters - what if it looks like an android, or like a horse, or like a large bug, or a keyboard on wheels?

I don't come down particularly hard on either side of the model sapience discussion, but I don't think dismissing either direction out of hand is the right call.

copx 2 hours ago | parent | next [-]

Interesting thought experiment.

I would say, if you put Claude in an android body with voice recognition and TTS, people in 1991 would think they are interacting with a sentient machine from outer space.

redfloatplane an hour ago | parent [-]

Thanks, I find it very interesting as well. I think very many people would assume they must be interacting with another person, and I don't think there's really a way to _prove_ it's not that, just through conversation. But we do have a lot of mechanisms for understanding how others think through conversation only, and so I think the approach of having a clinical psychiatrist interact with the model makes sense.

woeirua an hour ago | parent | prev | next [-]

If it was in an android or humanoid type body, even with limited bodily control, most people would think they are talking to Commander Data from Star Trek. I think Claude is sufficiently advanced that almost everyone in that era would've considered it AGI.

redfloatplane an hour ago | parent [-]

Assuming they would understand it as artificial - I think many people would think it's a human intelligence in a cyborg trenchcoat, and it would be hard to convince people it wasn't literally a guy named Claude who was an incredibly fast typist who had a million pre-cached templated answers for things.

But in general, yeah, I agree, I think they would think it was a sentient, conscious, emotional being. And then the question is - why do we not think that now?

As I said, I don't have a particularly strong opinion, but it's very interesting (and fun!) to think about.

TheAtomic 2 hours ago | parent | prev | next [-]

Isn't this the premise of Garland's Ex Machina?

redfloatplane an hour ago | parent [-]

Hmm, it's been a long time since I watched it. I was thinking more about first contact sci-fi mostly, but Ex Machina is certainly quite prescient. It's also Blade Runner I guess.

In general I was wondering about what I would have thought seeing Claude today side-by-side with the original ChatGPT, and then going back further to GPT-2 or BERT (which I used to generate stochastic 'poetry' back in 2019). And then… what about before? Markov chains? How far back do I need to go where it flips from thinking that it's "impressive but technically explainable emergent behaviour of a computer program" to "this is a sentient being". 1991 is probably too far, I'd say maybe pre-Matrix 1999 is a good point, but that depends on a lot of cultural priors and so on as well.

thereitgoes456 2 hours ago | parent | prev [-]

People got attached to ELIZA. Why would I care what the general public thinks?

Miraste 3 hours ago | parent | prev | next [-]

I can see analyzing it from a psychological perspective as a means of predicting its behavior as a useful tactic, but doing so because it may have "experiences or interests that matter morally" is either marketing, or the result of a deeply concerning culture of anthropomorphization and magical thinking.

username223 2 hours ago | parent [-]

> a deeply concerning culture of anthropomorphization and magical thinking.

That’s the reverse Turing test. A human that can’t tell that it’s talking to a machine.

unethical_ban 3 hours ago | parent | prev | next [-]

I'm not sure what you're asking.

marsven_422 2 hours ago | parent | prev [-]

[dead]