latexr 2 hours ago

> The whole artificial scarcity Anthropic created around Mythos / Glasswing is quite brilliant to be honest

Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.

https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf

__MatrixMan__ 33 minutes ago | parent | next [-]

The way they've published hashes of the bugs it has found, so that once those bugs are fixed they can responsibly disclose them while also proving they weren't lying, displays a willingness to dabble in evidence that goes far beyond anything OpenAI has done to support their claims.

xiphias2 an hour ago | parent | prev | next [-]

It was from GPT-2, and Dario was one of the developers of that model while he was working at OpenAI, not Sam Altman. It's his playbook.

latexr an hour ago | parent | next [-]

> It was from GPT-2

Prior to the release of GPT-5, Sam said he was scared of it and compared it to the Manhattan Project.

nipponese an hour ago | parent [-]

Not just Altman. Buffett said it also, more generally.

https://youtu.be/vZlMWF6iFZg

foobar_______ an hour ago | parent | prev | next [-]

Thank you. People are currently getting a hard-on claiming Anthropic are the 'good guys' and don't stop to actually look around and see what is going on and how both companies got here.

kordlessagain an hour ago | parent | prev | next [-]

This is pretty much correct, but Mustafa Suleyman has probably been doing it longer.

Hamuko an hour ago | parent | prev [-]

Not just part of the developers, but rather "led the development of large language models like GPT-2 and GPT-3" as per his website.

https://darioamodei.com/

Filligree 2 hours ago | parent | prev [-]

Anthropic has not in fact released it, and it does in fact appear to be that dangerous, judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg.

Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.

mccr8 an hour ago | parent | next [-]

The flood of reports that open source projects like curl, Linux, and Chromium are getting is presumably due to public models like Open 4.6, released earlier this year, and not to models with limited availability.

amarcheschi 2 hours ago | parent | prev | next [-]

How many months until they release a better model than Mythos to the general public?

GPT-2 wasn't fully released because OpenAI deemed it too dangerous. Rings a bell? https://openai.com/index/better-language-models/#sample1

Hizonner an hour ago | parent [-]

A few months of restricting access to people they think will actually fix problems is a big deal. Obviously only an idiot would think it could or should be kept under wraps forever.

kordlessagain an hour ago | parent | prev | next [-]

Those vulnerabilities were found by open models as well.

abustamam an hour ago | parent | next [-]

Partly true. I think the consensus was it wasn't comparable because Mythos swept the entire codebase and found the vulnerabilities, whereas the open models were told where to look for said vulnerabilities.

https://news.ycombinator.com/item?id=47732337

mccr8 an hour ago | parent | prev [-]

Not really. The models were pointed specifically at the location of the vulnerability and given some extra guidance. That's an easier problem than simply being pointed at the entire code base.

embedding-shape 2 hours ago | parent | prev [-]

> judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg

Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio somehow changed recently toward mostly good reports with real vulnerabilities?

rhdunn an hour ago | parent | next [-]

Some relevant links:

[1] https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-proje...

> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.

> He estimates that about 1 in 10 of the reports are security vulnerabilities, the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than each of the previous two years.

[2] https://www.linkedin.com/posts/danielstenberg_curl-activity-...

> The new #curl, AI, security reality shown with some graphs. Part of my work-in-progress presentation at foss-north on April 28.

StrauXX 2 hours ago | parent | prev | next [-]

He has changed his opinion completely. Yes, the ratio has turned.

depr 2 hours ago | parent | prev [-]

Yes:

> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

> I'm spending hours per day on this now. It's intense.

https://mastodon.social/@bagder/116336957584445742