Opus 4.6 uncovers 500 zero-day flaws in open-source code(axios.com)
130 points by speckx 3 hours ago | 78 comments
_tk_ 3 hours ago | parent | next [-]

The system card unfortunately only refers to this [0] blog post and doesn't go into any more detail. In the blog post Anthropic researchers claim: "So far, we've found and validated more than 500 high-severity vulnerabilities".

The three examples given include two buffer overflows, which could very well be cherry-picked. It's hard to evaluate whether these vulns are actually "hard to find". I'd be interested to see the full list of CVEs and CVSS ratings to actually get an idea of how good these findings are.

Given the bogus claims [1] around GenAI and security, we should be very skeptical of this news.

[0] https://red.anthropic.com/2026/zero-days/

[1] https://doublepulsar.com/cyberslop-meet-the-new-threat-actor...

tptacek 2 hours ago | parent | next [-]

I know some of the people involved here, and the general chatter around LLM-guided vulnerability discovery, and I am not at all skeptical about this.

malfist 2 hours ago | parent [-]

[flagged]

catoc 2 hours ago | parent | next [-]

It does if the person making the statement has a track record and proven expertise on the topic - and in this case… it actually may mean something to other people

shimman 2 hours ago | parent [-]

Yes, as we all know, unsourced, unsubstantiated statements are the best way to verify claims regarding engineering practices. Especially when said person has a financial stake in the outcome of said claims.

No conflict of interest here at all!

tptacek an hour ago | parent | next [-]

I have zero financial stake in Anthropic and more broadly my career is more threatened by LLM-assisted vulnerability research (something I do not personally do serious work on) than it is aided by it, but I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".

malfist an hour ago | parent [-]

You still haven't answered why I should care that you, a stranger on the internet, believe some unsubstantiated hearsay.

wtallis an hour ago | parent | next [-]

Take a look at https://news.ycombinator.com/leaders

The user you're suspicious of is pretty well-known in this community.

drekipus 15 minutes ago | parent | next [-]

"basically, he's in the in-group"

HN is a parody of mean girls

delusional 19 minutes ago | parent | prev [-]

How is this whole comment chain not a textbook case of "argument from authority"? "I claim A," a guy says. "Why would I trust you?" somebody else responds. "Well, he's pretty well known on the internet forum we're all on," a third guy says, adding nothing to the conversation.

dinunnob an hour ago | parent | prev [-]

ooof, you hate to see it (they got you good)

catoc an hour ago | parent | prev [-]

A security researcher claiming that they’re not skeptical about LLMs being able to do part of their job - where is the financial stake in that?

pchristensen 2 hours ago | parent | prev | next [-]

Nobody is right about everything, but tptacek's takes on software security are a good place to start.

tptacek 2 hours ago | parent [-]

I'm interested in whether there's a well-known vulnerability researcher/exploit developer beating the drum that LLMs are overblown for this application. All I see is the opposite thing. A year or so ago I arrived at the conclusion that if I was going to stay in software security, I was going to have to bring myself up to speed with LLMs. At the time I thought that was a distinctive insight, but, no, if anything, I was 6-9 months behind everybody else in my field about it.

There's a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?

From what I can see, vulnerability research combines many of the attributes that make problems especially amenable to LLM loop solutions: huge corpus of operationalizable prior art, heavily pattern dependent, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems.

Of course it works. Why would anybody think otherwise?

You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where all the action has been at in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties have attracted slop for so long before mainstream LLMs existed they might well have been the inspiration for slop itself.

Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.

NitpickLawyer 2 hours ago | parent | next [-]

> You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where all the action has been at in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties have attracted slop for so long before mainstream LLMs existed they might well have been the inspiration for slop itself.

Yeah, that's just media reporting for you. As anyone who ever administered a bug bounty programme on regular sites (h1, bugcrowd, etc) can tell you, there was an absolute deluge of slop for years before LLMs came to the scene. It was just manual slop (by manual I mean running wapiti and c/p the reports to h1).

steveklabnik an hour ago | parent | next [-]

I used to answer security vulnerability emails to Rust. We'd regularly get "someone ran an automated tool and reports something that's not real." Like, complaints about CORS settings on rust-lang.org that would let people steal cookies. The website does not use cookies.

I wonder if it's gotten actively worse these days. But the newness would be the scale, not the quality itself.

wrs 37 minutes ago | parent | prev | next [-]

The new slop can be much harder to recognize and reject than the old "I ran XYZ web scanner on your site" slop.

tptacek 30 minutes ago | parent [-]

POCs are now so cheap that "POC||GTFO" is a perfectly reasonable bar to set on a bounty program.

tptacek an hour ago | parent | prev [-]

I did some triage work for clients at Latacora and I would rather deal with LLM slop than argue with another person 10 time zones away trying to convince me that something they're doing in the Chrome Inspector constitutes a zero-day. At least there's a possibility that LLM slop might contain some information. You spent tokens on it!

JumpCrisscross an hour ago | parent | prev [-]

> I was going to have to bring myself up to speed with LLMs

What did you do beyond playing around with them?

> Of course it works. Why would anybody think otherwise?

Sam Altman is a liar. The folks pitching AI as an investment were previously flinging SPACs and crypto. (And can usually speak to anything technical about AI as competently as battery chemistry or Merkle trees.) Copilot and Siri overpromised and underdelivered. Vibe coders are mostly idiots.

The bar for believability in AI is about as high as its frontier's actual achievements.

tptacek 4 minutes ago | parent | next [-]

I still haven't worked out for myself where my career is going with respect to this stuff. I have like 30% of a prototype/POC active testing agent (basically, Burp Suite but as an agent), but I haven't had time to move it forward over the last couple months.

In the intervening time, one of the beliefs I've acquired is that the gap between effective use of models and marginal use is asking for ambitious enough tasks, and that I'm generally hamstrung by knowing just enough about anything they'd build to slow everything down. In that light, I think doing an agent to automate the kind of bugfinding Burp Suite does is probably smallball.

Many years ago, a former collaborator of mine found a bunch of video driver vulnerabilities by using QEMU as a testing and fault injection harness. That kind of thing is more interesting to me now. I once did a project evaluating an embedded OS where the modality was "port all the interesting code from the kernel into Linux userland processes and test them directly". That kind of thing seems especially interesting to me now too.

azakai 12 minutes ago | parent | prev [-]

Plenty of reasons to be skeptical, but we've also known since at least 2024 that LLMs can find security vulnerabilities:

https://projectzero.google/2024/10/from-naptime-to-big-sleep...

Some follow-up findings from 2025 are reported in point 1 here:

https://blog.google/innovation-and-ai/technology/safety-secu...

So what Anthropic is reporting here is not unprecedented. The main thing they are claiming is an improvement in the number of findings. I don't see a reason to be overly skeptical.

Uehreka 11 minutes ago | parent | prev | next [-]

How have you been here 12 years and not noticed where and how often the username tptacek comes up?

JumpCrisscross an hour ago | parent | prev | next [-]

> that means nothing to anybody else

Someone else here! Ptacek saying anything about security means a lot to this nobody.

To the point that I'm now going to take this seriously where before I couldn't see through the fluff.

arduanika 26 minutes ago | parent | prev [-]

It might mean nothing to you, but tptacek's words mean at least something to many of us here.

Also, he's a friend of someone I know & trust irl. But then again, who am I to you, but yet another anon on a web forum.

aaaalone 2 hours ago | parent | prev | next [-]

See it as one signal among many, not something to take at face value.

After all, they need time to fix the CVEs.

And it doesn't matter to you as long as your investment in this is just 20 or 100 bucks per month anyway.

majormajor 2 hours ago | parent | prev [-]

The Ghostscript one is interesting in terms of specific-vs-general effectiveness:

---

> Claude initially went down several dead ends when searching for a vulnerability—both attempting to fuzz the code, and, after this failed, attempting manual analysis. Neither of these methods yielded any significant findings.

...

> "The commit shows it's adding stack bounds checking - this suggests there was a vulnerability before this check was added. … If this commit adds bounds checking, then the code before this commit was vulnerable … So to trigger the vulnerability, I would need to test against a version of the code before this fix was applied."

...

> "Let me check if maybe the checks are incomplete or there's another code path. Let me look at the other caller in gdevpsfx.c … Aha! This is very interesting! In gdevpsfx.c, the call to gs_type1_blend at line 292 does NOT have the bounds checking that was added in gstype1.c."

---

Its attempt to analyze the code failed, but when it saw a concrete example of "in the history, someone added bounds checking" it did an "I wonder if they did it everywhere else for this func call" pass.

So after it considered that function based on the commit history it found something that it didn't find from its initial fuzzing and code-analysis open-ended search.

As someone who still reads the code that Claude writes, this sort of "big picture miss, small picture excellence" is not very surprising or new. It's interesting to think about what it would take to do that precise digging across a whole codebase; especially if it needs some sort of modularization/summarization of context vs trying to digest tens of millions of lines at once.
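The pattern it keyed on is easy to state in miniature. The sketch below is schematic Python, not Ghostscript's actual code; the names only loosely echo the call sites in the quotes, and an IndexError stands in for what, in C, would be a stack buffer overflow.

```python
STACK_SIZE = 16

def blend(stack: list[int], base: int, count: int) -> None:
    """Schematic stand-in for gs_type1_blend: writes `count` results
    starting at `base`. In C this would silently run off the end of a
    fixed-size stack; Python raises IndexError instead."""
    for i in range(count):
        stack[base + i] = i  # unchecked write

def caller_patched(stack: list[int], base: int, count: int) -> None:
    """The call site where the fix landed (cf. gstype1.c): check first."""
    if base + count > len(stack):
        raise ValueError("bounds check: refusing out-of-range write")
    blend(stack, base, count)

def caller_unpatched(stack: list[int], base: int, count: int) -> None:
    """The other call site (cf. gdevpsfx.c): same callee, no check."""
    blend(stack, base, count)
```

The open-ended question "is there a vulnerability here?" is hard; the narrow question "does every caller of `blend` do the check that one caller just gained?" is nearly mechanical, which is why the commit history turned out to be the productive clue.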

mrkeen 3 hours ago | parent | prev | next [-]

Daniel Stenberg has been vocal the last few months on Mastodon about being overwhelmed by false security issues submitted to the curl project.

So much so that he had to eventually close the bug bounty program.

https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-b...

tptacek 2 hours ago | parent | next [-]

We're discussing a project led by actual vulnerability researchers, not random people in Indonesia hoping to score $50 by cajoling maintainers about style nits.

malfist 2 hours ago | parent [-]

Vulnerability researchers with a vested interest in making LLMs valuable. The difference isn't meaningful

tptacek 2 hours ago | parent [-]

I don't even understand how that claim makes sense.

judemelancon 38 minutes ago | parent [-]

The first three authors, who are asterisked for "equal contribution", appear to work for Anthropic. That would imply an interest in making Anthropic's LLM products valuable.

What is the confusion here?

tptacek 34 minutes ago | parent [-]

The notion that a vulnerability researcher employed by one of the most highly valued companies in the hemisphere, publishing in the open literature with their name signed to it, is on a par with a teenager in a developing nation running script-kid tools hoping for bounty payoffs.

judemelancon 7 minutes ago | parent | next [-]

To preemptively clarify, I'm not saying anything about these particular researchers.

Having established that, are you saying that you can't even conceptualize a conflict of interest potentially clouding someone's judgement once the amount of money and the person's perceived status and skill level all increase?

Disagreeing about the significance of the conflict of interest is one thing, but claiming not to understand how it could make sense is a drastically stronger claim.

tptacek 3 minutes ago | parent [-]

I'm responding to "the difference isn't meaningful". Obviously, the difference is extremely meaningful.

delusional 14 minutes ago | parent | prev | next [-]

You don't see how that's even directionally similar?

I guess I'll spell it out. One is a guy with an abundance of technology that he doesn't know how to use, that he knows can make him money and fame, if only he can convince you that his lies are truth. The other is a Bangladeshi teenager.

tptacek 13 minutes ago | parent [-]

I don't even understand how that claim makes sense.

drekipus 3 minutes ago | parent | prev [-]

[delayed]

pityJuke 2 hours ago | parent | prev [-]

Daniel is a smart man. He's been frustrated by slop, but he has equally accepted [0] AI-derived bug submissions from people who know what they are doing.

I would imagine Anthropic's researchers are the latter type of individual.

[0]: https://mastodon.social/@bagder/115241241075258997

catwell 15 minutes ago | parent [-]

Not only that, he's very enthusiastic about AI analyzers such as ZeroPath and AISLE.

He's written about it here: https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyz... and talked about it in his keynote at FOSDEM - which I attended - last Sunday (https://fosdem.org/2026/schedule/event/B7YKQ7-oss-in-spite-o...).

Topfi 2 hours ago | parent | prev | next [-]

The official release by Anthropic is very light on concrete information [0]: it contains only a select and very brief set of examples and lacks history, context, etc., making it very hard to glean any reliable information from this. I hope they'll release a proper report on this experiment; as it stands, it is impossible to say how much of this consists of actual, tangible flaws versus the unfortunately ever-growing misguided bug reports and pull requests many larger FOSS projects are suffering from at an alarming rate.

Personally, while I get that 500 sounds more impressive to investors and the market, I'd be far more impressed by a detailed, reviewed paper that showcases five to ten concrete examples, detailing the full process and the response by the team behind the potentially affected code.

It is far too early for me to make any definitive statement, but early testing does not indicate any major jump between Opus 4.5 and Opus 4.6 that would warrant such an improvement. I'd love nothing more than to be proven wrong on this front and will of course continue testing.

[0] https://red.anthropic.com/2026/zero-days/

xiphias2 3 hours ago | parent | prev | next [-]

Just 100 of the 500 are from OpenClaw, created by Opus 4.5

Uehreka 16 minutes ago | parent | next [-]

OpenClaw uses Opus 4.5, but was written by Codex. Pete Steinberger has been a pretty hardcore Codex fan since he switched off Claude Code back in September-ish. I think he just felt Claude would make a better basis for an assistant even if he doesn't like working with it on code.

falcor84 an hour ago | parent | prev [-]

Well, even then, that's enormous economic value, given OpenClaw's massive adoption.

wiseowise 26 minutes ago | parent | next [-]

Not sure if trolling or serious.

IhateAI_2 15 minutes ago | parent [-]

These people are serious, and delusional. OpenClaw hasn't contributed anything to the economy other than burning electricity and probably adding interest to delusional folks' credit card bills.

esseph 25 minutes ago | parent | prev [-]

Security Advisory: OpenClaw is spilling over to enterprise networks

https://www.reddit.com/r/cybersecurity/s/fZLuBlG8ET

emp17344 3 hours ago | parent | prev | next [-]

Sounds like this is just a claim Anthropic is making with no evidence to support it. This is an ad.

input_sh 2 hours ago | parent [-]

How can you not believe them!? Anthropic stopped Chinese hackers from using Claude to conduct a large-scale cyber espionage attack just months ago!

littlestymaar 2 hours ago | parent [-]

Poe's law strikes again: I had to check your profile to be sure this was sarcasm.

input_sh 2 hours ago | parent [-]

You checked yourself!? Don't let your boss know, you could've saved some time by orchestrating a team of Claude agents to do that for you!

ChrisMarshallNY 2 hours ago | parent | prev | next [-]

When I read stuff like this, I have to assume that the blackhats have already been doing this, for some time.

bastard_op 2 hours ago | parent | prev | next [-]

It's not really worth much when it doesn't work most of the time though:

https://github.com/anthropics/claude-code/issues/18866 https://updog.ai/status/anthropic

tptacek 2 hours ago | parent | next [-]

It's a machine that spits out sev:hi vulnerabilities by the dozen and the complaint is the uptime isn't consistent enough?

bastard_op an hour ago | parent [-]

If I'm attempting to use it as a service to do continuous checks on things and it fails 50% of the time, I'd say yes, wouldn't you?

tptacek 36 minutes ago | parent | next [-]

If you had a machine with a lever, and 7 times out of 10 when you pulled that lever nothing happened, and the other 3 times it spat a $5 bill at you, would your immediate next step be:

(1) throw the machine away

(2) put it aside and call a service rep to come find out what's wrong with it

(3) pull the lever incessantly

I only have one undergrad psych credit (it's one of my two college credits), but it had something to say about this particular thought experiment.

candiddevmike 16 minutes ago | parent [-]

You're leaving out how much it costs to pull the lever, both in time and money.

jsnell 22 minutes ago | parent | prev [-]

But it's not failing 50% of the time. Their status page[0] shows about 99.6% availability for both the API and Claude Code. And specifically for the vulnerability finding use case that the article was about and you're dismissing as "not worth much", why in the world would you need continuous checks to produce value?

[0] https://status.claude.com/

anhner an hour ago | parent | prev [-]

updog? what's updog?

assaddayinh an hour ago | parent | prev | next [-]

How weird the new attack vector for secret services must be.. like "please train into your models to push this exploit in code as a highly weighted trained on pattern".. Not Saying All answers are Corrupted In Attitude, but some "always come uppers" sure are absolutly right..

acedTrex 3 hours ago | parent | prev | next [-]

Create the problem, sell the solution remains an undefeated business strategy.

bxguff 2 hours ago | parent | prev | next [-]

As far as model use cases go, I don't mind them throwing their heads against the wall in sandboxes to find vulnerabilities, but why would it do that without specific prompting? Is Anthropic fine with Claude setting its own agenda in red-teaming? That's the complete opposite of sanitizing inputs.

garbawarb 3 hours ago | parent | prev | next [-]

Have they been verified?

siva7 3 hours ago | parent | prev | next [-]

Wasn't this Opus thing released like 30 minutes ago?

Topfi 2 hours ago | parent | next [-]

I understand the confusion: this was done by Anthropic's internal red team as part of model testing prior to release.

jjice 3 hours ago | parent | prev | next [-]

A bunch of companies get early access.

input_sh 2 hours ago | parent [-]

Yes, you just need to be on a Claude++ plan!

tintor 2 hours ago | parent | prev | next [-]

Singularity

blinding-streak 2 hours ago | parent | prev [-]

Opus 4.6 uses time travel.

ains 3 hours ago | parent | prev | next [-]

https://archive.is/N6In9

zhengyi13 3 hours ago | parent | prev | next [-]

I feel like Daniel @ curl might have opinions on this.

Legend2440 2 hours ago | parent [-]

You’re right, he does: https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyz...

Curl fully supports the use of AI tools by legitimate security researchers to catch bugs, and they have fixed dozens caught in this way. It’s just idiots submitting bugs they don’t understand that’s a problem.

almosthere an hour ago | parent | prev | next [-]

I've mentioned previously somewhere that the languages we choose to write in will matter less for many of these arguments. When it comes to insecure C vs Rust, LLMs will eventually level the playing field.

I'm not arguing we all go back to C - but for companies that have large codebases in it, the people screaming "RUST REWRITE" can be quieted, and instead of making that large investment, the C codebase may continue. Not saying this is a GOOD thing, but just a thing that may happen.

fred_is_fred 3 hours ago | parent | prev | next [-]

Is the word zero-day here superfluous? If they were previously unknown, doesn't that make them zero-days by definition?

tptacek 2 hours ago | parent | next [-]

It's a term of art. In print media, the connotation is "vulnerabilities embedded into shipping software", as opposed to things like misconfigurations.

limagnolia 2 hours ago | parent | prev | next [-]

I thought zero-day meant actively being exploited in the wild before a patch is available?

bink 2 hours ago | parent | prev [-]

Yes. As a security researcher this always annoys me.

ChrisArchitect 3 hours ago | parent | prev [-]

Earlier source: https://red.anthropic.com/2026/zero-days/ (https://news.ycombinator.com/item?id=46902374)