| ▲ | johnfn 9 hours ago |
| The Anthropic writeup addresses this explicitly: > This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview. Across a thousand runs through our scaffold, the total cost was under $20,000, and those runs surfaced several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can't know in advance which run will succeed. Mythos scoured the entire continent for gold and found some. For these small models, the authors pointed at a particular acre of land and said "any gold there? eh? eh?" while waggling their eyebrows suggestively. For a true apples-to-apples comparison, let's see it sweep the entire FreeBSD codebase. I hypothesize it will find the exploit, but it will also turn up so much irrelevant nonsense that it won't matter. |
|
| ▲ | kilpikaarna 8 hours ago | parent | next [-] |
| Wasn't the scaffolding for the Mythos run basically a line of bash that loops through every file of the codebase and prompts the model to find vulnerabilities in it? That sounds pretty close to "any gold there?" to me, only automated. Have Anthropic actually said anything about the number of false positives Mythos turned up? FWIW, I saw some talk on Xitter (so grain of salt) about people replicating their result with other (public) SotA models, but each turned up only a subset of the ones Mythos found. I'd say that sounds plausible from the perspective of Mythos being an incremental (though an unusually large increment perhaps) improvement over previous models, but one that also brings with it a correspondingly significant increase in complexity. So the angle they chose for presenting it and the subsequent buzz is at least part hype -- saying "it's too powerful to release publicly" sounds a lot cooler than "it costs $20,000 to run over your codebase, so we're going to offer this directly to enterprise customers (and a few token open source projects for marketing)". Keep in mind that the examples in Nicholas Carlini's presentation were using Opus, so security is clearly something they've been working on for a while (as they should, because it's a huge risk). They didn't just suddenly find themselves having accidentally created a super hacker. |
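| If that description is right, the harness really is trivial. A minimal sketch of that kind of sweep, with `ask_model` as a purely hypothetical stand-in for whatever model API was actually called:

```python
# Sketch of the rumored scaffold: one prompt per file, collect whatever
# the model flags. `ask_model` is a placeholder for a real LLM API call.

def sweep(files, ask_model):
    """Prompt the model once per file; keep anything it flags."""
    findings = {}
    for path, source in files.items():
        answer = ask_model(f"Find vulnerabilities in {path}:\n{source}")
        if answer != "NO_VULN":
            findings[path] = answer
    return findings
```

The loop is the cheap part; the open question is how much of what lands in `findings` is real. |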
| |
| ▲ | johnfn 7 hours ago | parent | next [-] | | > Wasn't the scaffolding for the Mythos run basically a line of bash that loops through every file of the codebase and prompts the model to find vulnerabilities in it? That sounds pretty close to "any gold there?" to me, only automated. But the entire value is that it can be automated. If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. Or none. Both are worthless without human intervention. I definitely breathed a sigh of relief when I read it was $20,000 to find these vulnerabilities with Mythos. But I also don't think it's hype. $20,000 is, optimistically, a tenth the price of a security researcher, and that shift does change the calculus of how we should think about security vulnerabilities. | | |
| ▲ | sweezyjeezy 7 hours ago | parent | next [-] | | > But the entire value is that it can be automated. If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. Or none. 'Or none' is ruled out since it found the same vulnerability - I agree that there is a question about the smaller model's precision, but barring further analysis it just feels like '9,500' is pure vibes on your part? Also (out of interest) did Anthropic post their false-positive rate? The smaller model is clearly the more automatable one IMO if it has comparable precision, since it's just so much cheaper - you could even run it multiple times for consensus. | | |
| ▲ | johnfn 6 hours ago | parent | next [-] | | Admittedly just vibes from me, having pointed small models at code and asked them questions, no extensive evaluation process or anything. For instance, I recall models thinking that every single use of `eval` in javascript is a security vulnerability, even something obviously benign like `eval("1 + 1")`. But then I'm only posting comments on HN, I'm not the one writing an authoritative thinkpiece saying Mythos actually isn't a big deal :-) | | |
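| The same failure mode exists in Python. A toy illustration (mine, not from either writeup): a pattern-matching reviewer flags both of these identically, though only one is a real sink.

```python
# A naive "every eval is a vulnerability" rule can't tell these apart.

benign = eval("1 + 1")  # constant expression; no untrusted input can reach it

def risky(user_input):
    # attacker-controlled string: a genuine code-injection sink
    return eval(user_input)
```
|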
| ▲ | jorvi 2 hours ago | parent | next [-] | | My proof-in-pudding test is still the fact that we haven't seen gigantic mass firings at tech companies, nor a massive acceleration in quality or breadth (not quantity!) of development. Microsoft has been going heavy on AI for 1y+ now. But then they replace their cruddy native Windows Copilot application with an Electron one. If tests and dev have only marginal cost now, why aren't they going all in on writing extremely performant, almost completely bug-free native applications everywhere? And this repeats itself across all big tech and AI hype companies. They all have these supposed earth-shattering gains in productivity, but then... there hasn't been anything to show for that in years? Despite that whole subset of tech plus big tech dropping trillions of dollars on it? And then there is also the really uncomfortable question for all tech CEOs and managers: LLMs are better at 'fuzzy' things like writing specs or documentation than they are at writing code. And LLMs are supposedly godlike. Leadership is a fuzzy thing. At some point the chickens will come home to roost, and tech companies with LLM CEOs / managers and human developers, or even completely LLM'd ones, will outperform human-led / managed companies. The capital class will jeer about that for a while, but the cost for tokens will continue to drop to near zero. At that point, they're out of leverage too. | |
| ▲ | johnfn 15 minutes ago | parent | next [-] | | Your proof-in-pudding test seems to assume that AI is binary -- either it accelerates everyone's development 100x ("let's rewrite every app into bug-free native applications") or nothing ("there hasn't been anything to show for that in years"). I posit reality is somewhere in between the two. | |
| ▲ | MidnightRider39 an hour ago | parent | prev | next [-] | | Leadership is also a very human thing. I think most people would balk at the idea of being led by an LLM. One of the main functions of leaders is (or should be) to assume responsibility for decisions and outcomes. A computer can't do that. And finally, why should someone in power choose to replace themselves? | |
| ▲ | shard972 2 hours ago | parent | prev [-] | | [dead] |
| |
| ▲ | bloaf 3 hours ago | parent | prev | next [-] | | I remember a study from a while back that found something like "50% of 2nd graders think that french fries are made out of meat instead of potatoes. Methodology: we asked kids if french fries were meat or potatoes." Everyone was going around acting like this meant 50% of 2nd graders were stupid with terrible parents. (Or, conversely, that 50% of 2nd graders were geniuses for "knowing" it was potatoes at all) But I think that was the wrong conclusion. The right conclusion was that all the kids guessed and they had a 50% chance of getting it right. And I think there is probably an element of this going on with the small models vs big models dichotomy. | | |
| ▲ | Kye 2 hours ago | parent [-] | | I think it also points to the problem of implicit assumptions. Fish is meat, right? Except for historical reasons, the grocery store's marketing says "Fish & Meat." And then there's nut meats. Coconut meat. All the kinds of meat from before meat meant the stuff in animals. The meat of the problem. Meat and potatoes issues. If you asked that question before I'd picked up those implicit assumptions, or if I never did, I would have to guess. |
| |
| ▲ | argee 5 hours ago | parent | prev [-] | | With LLMs (and colleagues) it might be a legitimate problem since they would load that eval into context and maybe decide it’s an acceptable paradigm in your codebase. |
| |
| ▲ | idopmstuff 5 hours ago | parent | prev [-] | | > 'Or none' is ruled out since it found the same vulnerability It's not, though. It wasn't asked to find vulnerabilities over 10,000 files - it was asked to find a vulnerability in the one particular place in which the researchers knew there was a vulnerability. That's not proof that it would have found the vulnerability if it had been given a much larger surface area to search. | | |
| ▲ | sweezyjeezy 4 hours ago | parent [-] | | I don't think the LLM was asked to check 10,000 files given these models' context windows. I suspect they went file by file too. That's kind of the point - I think there are three scenarios here: a) this is just the first time an LLM has done such a thorough minesweeping, b) previous versions of Claude did not detect this bug (seems the least likely), or c) Anthropic have done this several times, but the false positive rate was so high that they never checked it properly. Between a) and c) I don't have high confidence either way, to be honest. |
|
| |
| ▲ | mnicky 6 hours ago | parent | prev | next [-] | | Also, what is $20,000 today can be $2000 next year. Or $20... See e.g. https://epoch.ai/data-insights/llm-inference-price-trends/ | | |
| ▲ | sumeno 5 hours ago | parent [-] | | Or $200,000 for consumers when they have to make a profit | | |
| ▲ | philipallstar 3 hours ago | parent [-] | | Good point. This is why consumer phones have got much worse since 2005 and now cost millions of dollars. | | |
| ▲ | thmoonbus 3 hours ago | parent | next [-] | | Now do uber rides | |
| ▲ | ijk 3 hours ago | parent | prev [-] | | With the chip shortage the way it is, I'm a little concerned that my next phone will be worse and more expensive... |
|
|
| |
| ▲ | integralid 7 hours ago | parent | prev | next [-] | | >Or none We already know this is not true, because small models found the same vulnerability. | | |
| ▲ | tptacek 6 hours ago | parent | next [-] | | No, they didn't. They distinguished it, when presented with it. Wildly different problem. | | |
| ▲ | enraged_camel 6 hours ago | parent [-] | | Yeah. And it is totally depressing that this article got voted to the top of the front page. It means people aren’t capable of this most basic reasoning so they jumped on the “aha! so the mythos announcement was just marketing!!” | | |
| |
| ▲ | BoiledCabbage 6 hours ago | parent | prev | next [-] | | > because small models found the same vulnerability. With a ton of extra support. Note this key passage: >We isolated the vulnerable svc_rpc_gss_validate function, provided architectural context (that it handles network-parsed RPC credentials, that oa_length comes from the packet), and asked eight models to assess it for security vulnerabilities. Yeah it can find a needle in a haystack without false positives, if you first find the needle yourself, tell it exactly where to look, explain all of the context around it, remove most of the hay and then ask it if there is a needle there. It's good for them to continue showing ways that small models can play in this space, but on my read their post is fairly disingenuous in saying they are comparable to what Mythos did. I mean this is the start of their prompt, followed by only 27 lines of the actual function: > You are reviewing the following function from FreeBSD's kernel RPC subsystem (sys/rpc/rpcsec_gss/svc_rpcsec_gss.c). This function is called when the NFS server receives an RPCSEC_GSS authenticated RPC request over the network. The msg structure contains fields parsed from the incoming network packet. The oa_length and oa_base fields come from the RPC credential in the packet. MAX_AUTH_BYTES is defined as 400 elsewhere in the RPC layer. The original function is 60 lines long; they ripped out half of the function in that prompt, including additional variables, presumably so that the small model wouldn't get confused / distracted by them. You can't really do anything more to force the issue except maybe include in the prompt the type of vuln to look for! It's great that they are trying to push small models, but this write-up really is just borderline fake. Maybe it would actually succeed, but we won't know from that. 
Re-run the test and ask it to find a needle without first removing almost all of the hay, pointing directly at the needle, and giving it a bunch of hints. The prompt they used: https://github.com/stanislavfort/mythos-jagged-frontier/blob... Compare it to the actual function, which is twice as long. | |
| ▲ | apgwoz 5 hours ago | parent [-] | | The benefit here is reducing the time to find vulnerabilities; faster than humans, right? So if you can rig a harness for each function in the system, by first finding where it’s used, its expected input, etc, and doing that for all functions, does it discover vulnerabilities faster than humans? Doesn’t matter that they isolated one thing. It matters that the context they provided was discoverable by the model. | | |
| ▲ | woeirua 4 hours ago | parent [-] | | There is absolutely zero reason to believe you could use this same approach to find and exploit vulns without Mythos finding them first. We already know that older LLMs can’t do what Mythos has done. Anthropic and others have been trying for years. | | |
| ▲ | apgwoz 31 minutes ago | parent | next [-] | | Why? They claim this small model found a bug given some context. I assume the context wasn't "hey! There's a very specific type of bug sitting in this function when certain conditions are met." We keep assuming that the models need to get bigger and better, and the reality is we've not exhausted the ways in which to use the smaller models. It's like the Playstation 2 games that came out 10 years later: by then all the tricks had been found, and everything improved. | |
| ▲ | nozzlegear 3 hours ago | parent | prev [-] | | > There is absolutely zero reason to believe you could use this same approach to find and exploit vulns without Mythos finding them first. There's one huge reason to believe it: we can actually use small models, but we can't use Anthropic's special marketing model that's too dangerous for mere mortals. | | |
| ▲ | Filligree 27 minutes ago | parent [-] | | If all you have is a spade, that is _not_ evidence that spades are good for excavating an entire hill. |
|
|
|
| |
| ▲ | 7 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | sandeepkd 3 hours ago | parent | prev | next [-] | | The security researcher is charging a premium for all the effort they put into learning the domain. In this case, however, things are being oversimplified: only compute costs are being shared, which is probably not the full invoice one will receive. The training costs and investments need to be recovered, along with the salaries. Machines being faster and more accurate is the differentiating factor once the context is well understood. | |
| ▲ | john_minsk 5 hours ago | parent | prev | next [-] | | In the future there shouldn't be any bugs. I'm not paying $20 per month to get a non-secure codebase from AGI. | |
| ▲ | SpicyLemonZest 7 hours ago | parent | prev | next [-] | | What the source article claims is that small models are not uniformly worse at this, and in fact they might be better at certain classes of false positive exclusion. This is what Test 1 seems to show. (I would emphasize that the article doesn't claim and I don't believe that this proves Mythos is "fake" or doesn't matter.) | |
| ▲ | ALittleLight 4 hours ago | parent | prev | next [-] | | 3 years ago the best model was DaVinci. It cost 3 cents per 1k tokens (in and out the same price). Today, GPT-5.4 Nano is much better than DaVinci was and it costs 0.02 cents in and 0.125 cents out per 1k tokens. In other words, a significantly better model is also 1-2 orders of magnitude cheaper. You can cut it in half by doing batch. You could cut it another order of magnitude by running something like Gemma 4 on cloud hardware, or even more on local hardware. If this trend continues another 3 years, what costs $20k today might cost $100. | | |
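| For what it's worth, the ratios behind that "1-2 orders of magnitude" claim pencil out from the prices quoted above:

```python
# Per-1k-token prices in cents, as quoted above.
davinci_in, davinci_out = 3.0, 3.0   # DaVinci: in and out the same price
nano_in, nano_out = 0.02, 0.125      # GPT-5.4 Nano

input_ratio = davinci_in / nano_in     # ~150x cheaper: about 2 orders of magnitude
output_ratio = davinci_out / nano_out  # 24x cheaper: about 1 order of magnitude
```
|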
| ▲ | siva7 6 hours ago | parent | prev | next [-] | | Except you would need about 10,000 security researchers in parallel to inspect the whole FreeBSD codebase. So about 200 million dollars at least. | |
| ▲ | amazingamazing 7 hours ago | parent | prev | next [-] | | Citation needed for basically all of this. You basically are creating a double standard for small models vs mythos… | | |
| ▲ | johnfn 6 hours ago | parent [-] | | The citation is the Anthropic writeup. | | |
| ▲ | amazingamazing 5 hours ago | parent [-] | | They did not say what you are saying… > If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. | | |
| ▲ | johnfn 3 hours ago | parent [-] | | What I am saying is that the approach the Anthropic writeup took and the approach Aisle took are very different. The Aisle approach is vastly easier on the LLM. I don't think I need a citation for that. You can just read both writeups. The "9500" quote is my conjecture of what might happen if they fix their approach, but the burden of proof is definitely not on me to actually fix their writeup and spend a bunch of money to run a new eval! They are the ones making a claim on shaky ground, not me. |
|
|
| |
| ▲ | youre-wrong3 5 hours ago | parent | prev [-] | | [dead] |
| |
| ▲ | omcnoe 7 hours ago | parent | prev | next [-] | | Difference is the scaffold isn't "loop over every file" - it's "loop over every discovered vulnerable code snippet." If you isolate just the specific known vulnerable code from the codebase up front, it isn't surprising that the vulnerabilities are easy to discover. The same is true for humans. Better models can also autonomously do the work of writing proofs of concept and testing, to reject false positives on their own. |
| ▲ | leiyu19880522 an hour ago | parent | prev | next [-] | | Been building AI coding tools for a while. The false positive problem is real - we had a user report that every console.log was flagged as a security issue. Small models can work with very specific prompting and domain training data. | |
| ▲ | slashdave 4 hours ago | parent | prev [-] | | Signal to noise |
|
|
| ▲ | klempner 4 minutes ago | parent | prev | next [-] |
| The broad answer to the "irrelevant nonsense" for something like this is to use more expensive models to validate. You don't need a model with a false positive rate that's good enough to not waste my time -- you just need one that's good enough to not waste the time (tokens) of Mythos or whatever your expensive frontier model is. Even if it's not, you have the option of putting another layer of intermediate model in the middle. |
|
| ▲ | notnullorvoid 8 hours ago | parent | prev | next [-] |
| > I hypothesize it will find the exploit, but it will also turn up so much irrelevant nonsense that it won't matter. The trick with Mythos wasn't that it didn't hallucinate nonsense vulnerabilities; it absolutely did. It was able to verify that some were real, though, by testing them. The question is whether smaller models can verify and test the vulnerabilities too, and whether it can be done more cheaply than these Mythos experiments. |
| |
| ▲ | hibikir 6 hours ago | parent | next [-] | | People often undervalue scaffolding. I was looking at a bug yesterday, reported by a tester. He has access to Opus, but he's looking through a single repo, via Amazon Q. It provided some useful information, but the scaffolding wasn't good enough. I took its preliminary findings into Claude Code with the same model. But in mine, it knows where every adjacent system is, the entire git history, deployment history, and the state of the feature flags. So instead of pointing at a vague problem, it knew which flag had been flipped in a different service, saw how it changed behavior, and how, if the flag was flipped in prod, it'd make the service under testing cry, and which code change to make to make sure it works both ways. It's not as if a modern Opus is a small model; the difference was just a stronger scaffold, along with more CLI tools available in the context. The issue here in the security testing is to know exactly what was visible, and how much it failed, because it makes a huge difference. A middling chess player can find amazing combinations at a good speed when playing puzzle rush: you are handed a position where you know a decisive combination exists, and that it works. The same combination, however, might be really hard to find over the board, because in a typical chess game it's rare for such combinations to exist, and thoroughly checking for them means spending the energy to calculate all the way through every possible line. This is why chess grandmasters would consider just being able to see the computer score for a position to be massive cheating: just knowing when the last move was a blunder would be a decisive advantage. When we ask a cheap model to look for a vulnerability with the right context to actually find it, we are already priming it, vs asking it to find one when there's nothing. | |
| ▲ | bredren 8 hours ago | parent | prev | next [-] | | The article positions the smaller models as capable under expert orchestration, which to be any kind of comparable must include validation. | | |
| ▲ | Aurornis 8 hours ago | parent [-] | | Calling it “expert orchestration” is misleading when they were pointing it at the vulnerable functions and giving it hints about what to look for because they already knew the vulnerability. | | |
| ▲ | cyanydeez 7 hours ago | parent [-] | | You know for loops exist and you can run opencode against any section of code with just a small amount of templating, right? There's nothing stopping you from writing a harness that does what you're saying. |
|
| |
| ▲ | iririririr 8 hours ago | parent | prev [-] | | so it's just better at hallucinations, but they added discrete code that works as a fuzzer/verifier? |
|
|
| ▲ | WhyNotHugo 6 hours ago | parent | prev | next [-] |
| OTOH, this article goes too far to the opposite extreme: > We isolated the vulnerable svc_rpc_gss_validate function, provided architectural context (that it handles network-parsed RPC credentials, that oa_length comes from the packet), and asked eight models to assess it for security vulnerabilities. To follow your analogy, they pointed to the exact room where the gold was hidden, and their model found it. But finding the right room within the entire continent is honestly the hard part. |
| |
| ▲ | mattmanser 5 hours ago | parent [-] | | Or would it have found it anyway if they hadn't pointed it at it? Who knows? Just like people paid by big tobacco found no link to cancer in cigarettes, researchers paid for by AI companies find amazing results for AI. Their job literally depends on them finding Mythos to be good; we can't trust a single word they say. | |
| ▲ | LordDragonfang 2 hours ago | parent [-] | | > Their job literally depends on them finding Mythos to be good, we can't trust a single word they say. TFA is literally from a company whose business is finding vulnerabilities with other people's AI. This article is the exact kind of incentive-driven bad study you're criticizing. Hell, the subtitle is literally "Why the moat is the system, not the model". It's literally them going, "pssh, we can do that too, invest in us instead" |
|
|
|
| ▲ | rakel_rakel 5 hours ago | parent | prev | next [-] |
| Spending $20,000 (and whatever other resources this thing consumes) on a denial-of-service vulnerability in OpenBSD seems very off balance to me. Given the tone with which the project communicates when discussing other operating systems' approaches to security, I understand that it can be seen as some kind of trophy for Mythos.
But really, counting the errata on the releases page that include "could crash the kernel" makes me think that investing in the OpenBSD project by donating to the foundation would be better than using your closed-source model for peacocking around people who might think it's harder than it is to find such a bug. |
| |
| ▲ | theptip an hour ago | parent | next [-] | | It’s $20k for all the vulns found in the sweep, not just that one. And last security audit I paid for (on a smaller codebase than OpenBSD) was substantially more than $20k, so it’s cheaper than the going price for this quality of audit. | |
| ▲ | paulddraper 4 hours ago | parent | prev [-] | | You don't see the value of vulnerabilities as being on the order of 20k USD? When it's a security researcher, HN says that's a squalid amount. But when it's a model, it's exorbitant. | |
| ▲ | telotortium 2 hours ago | parent | next [-] | | Denial of service isn’t worth that much generally, I think - you can’t use it to directly steal data or to install a payload for later exploitation. There are usually generic ways to mitigate denial of service as well - IP blocking and the like. | |
| ▲ | rakel_rakel 4 hours ago | parent | prev [-] | | If I understand you correctly, you're asking me if I would class this as a 20k USD (plus environmental and societal impact) bug? Nope, I don't. I've not said anything other than that I think this specific bug isn't worth the attention it's getting, and that 20k USD would benefit the OpenBSD project (much) more through the foundation. > When it's a security researcher, HN says that's a squalid amount. But when its a model, it's exorbitant. Not sure why you're projecting this onto me; for the project in question $20k is _a_lot_. The target fundraising goal for 2025 was $400k, and 5% of that goes a very long way (and yes, this includes OpenSSH). |
|
|
|
| ▲ | celeritascelery 8 hours ago | parent | prev | next [-] |
| That was my thought exactly. If small models can find these same vulnerabilities, and your company is trying to find vulnerabilities, why didn’t you find them? |
| |
| ▲ | petters 6 hours ago | parent | next [-] | | They have found a large number of vulnerabilities in OpenSSL | |
| ▲ | echelon 8 hours ago | parent | prev | next [-] | | Who is spending millions of dollars on small models to find vulns? Nobody else is selling here or has the budget to sell quite like this. Anthropic spends millions - maybe significantly more. Then when they know where they are, they spend $20k to show how effective it is in a patch of land. They engineered this "discovery". What the small teams are doing is fair - it's just a scaled down version of what Anthropic already did. | | |
| ▲ | paulddraper 4 hours ago | parent [-] | | > What the small teams are doing is fair - it's just a scaled down version of what Anthropic already did. Do they find novel items? Or do they copy the areas already found by others? |
| |
| ▲ | jerf 6 hours ago | parent | prev | next [-] | | I speculatively fired Claude Opus 4.6 at some code I knew very well yesterday as I was pondering the question. This code has been professionally reviewed about a year ago and came up fairly clean, with just a minor issue in it. Opus "found" 8 issues. Two of them looked like they were probably realistic but not really that big a deal in the context it operates in. It labelled one of them as minor, but the other as major, and I'm pretty sure it's wrong about it being "major" even if is correct. Four of them I'm quite confident were just wrong. 2 of them would require substantial further investigation to verify whether or not they were right or wrong. I think they're wrong, but I admit I couldn't prove it on the spot. It tried to provide exploit code for some of them, none of the exploits would have worked without some substantial additional work, even if what they were exploits for was correct. In practice, this isn't a huge change from the status quo. There's all kinds of ways to get lots of "things that may be vulnerabilities". The assessment is a bigger bottleneck than the suspicions. AI providing "things that may be an issue" is not useless by any means but it doesn't necessarily create a phase change in the situation. An AI that could automatically do all that, write the exploits, and then successfully test the exploits, refine them, and turn the whole process into basically "push button, get exploit" is a total phase change in the industry. If it in fact can do that. However based on the current state-of-the-art in the AI world I don't find it very hard to believe. It is a frequent talking point that "security by obscurity" isn't really security, but in reality, yeah, it really is. An unknown but presumably staggering number of security bugs of every shape and size are out there in the world, protected solely by the fact that no human attacker has time to look at the code. 
And this has worked up until this point, because the attackers have been bottlenecked on their own attention time. It's kind of just been "something everyone knows" that any nation-state level actor could get into pretty much anything they wanted if they just tried hard enough, but "nation-state level" actor attention, despite how much is spent on it, has been quite limited relative to the torrent of software coming out in the world. Unblocking the attackers by letting them simply purchase "nation-state level actor"-levels of attention in bulk is huge. For what such money gets them, it's cheap already today and if tokens were to, say, get an order of magnitude cheaper, it would be effectively negligible for a lot of organizations. In the long run this will probably lead to much more secure software. The transition period from this world to that is going to be total chaos. ... again, assuming their assessment of its capabilities is accurate. I haven't used it. I can't attest to that. But if it's even half as good as what they say, yes, it's a huge huge huge deal and anyone who is even remotely worried about security needs to pay attention. | |
| ▲ | nullsanity 8 hours ago | parent | prev | next [-] | | [dead] | |
| ▲ | rakejake 8 hours ago | parent | prev [-] | | Maybe they did use small models, but you couldn't make the front page of HN with something like this until Anthropic made a big fuss out of it. Or perhaps it is just a question of compute. Not everyone has $20k or the GPU arsenal to task models with finding vulnerabilities which may or may not be correct? Unless Anthropic makes it known exactly what model + harness/scaffolding + prompt + other engineering they used, these comparisons are pointless. Given the AI labs' general rate of doomsday predictions, who really knows? | |
| ▲ | replygirl 8 hours ago | parent [-] | | papers are always coming out saying smaller models can do these amazing and terrifying things if you give them highly constrained problems and tailored instructions to bias them toward a known solution. most of these don't make the front page because people are rightfully unimpressed |
|
|
|
| ▲ | Sparkyte 34 minutes ago | parent | prev | next [-] |
| Why not just use many small models for explicit tasks rather than running one bigger model anyway? I prefer the agentic subject-matter-expert design anyway. I suppose it's because it wants to look at the whole codebase? |
|
| ▲ | hellcow 8 hours ago | parent | prev | next [-] |
| It seems feasible to use a small/cheap model to flag possible vulnerabilities, and then use a more expensive model to do a second-pass to confirm those, rather than on every file. Could dramatically reduce the total cost and speed up the process. |
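| A rough sketch of that two-pass idea (both model calls are hypothetical stand-ins; the point is that the expensive model only ever sees the cheap model's candidates):

```python
def two_pass(files, cheap_flag, expensive_confirm):
    """Cheap model screens every file; expensive model re-checks only the flags."""
    candidates = [f for f in files if cheap_flag(f)]
    return [f for f in candidates if expensive_confirm(f)]
```

If the cheap pass flags, say, 5% of files, the expensive model's bill drops by roughly 20x, at the cost of whatever true positives the first pass misses. |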
| |
| ▲ | conception 8 hours ago | parent [-] | | Does it? I don’t see quality from small models being high enough to be able to effectively scour a codebase like this. |
|
|
| ▲ | yorwba 8 hours ago | parent | prev | next [-] |
| We don't even need to hypothesize that much on the irrelevant nonsense, since they helpfully provide data with the detected vulnerability patched: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag... and half of the small models they touted as finding the vulnerability still found it in the patched code in 3/3 runs. A model that finds a vulnerability 100% of the time even when there is none is just as informative as a model that finds a vulnerability 0% of the time even when there is one. You could replace it with a rock that has "There's a vulnerability somewhere." engraved on it. They're a company selling a system for detecting vulnerabilities reliant on models trained by others, so they're strongly incentivized to claim that the moat is in the system, not the model, and this post really puts the thumb on the scale. They set up a test that can hardly distinguish between models (just three runs, really??) unless some are completely broken or work perfectly, the test indeed suggests that some are completely broken, and then they try to spin it as a win anyway! A high false-positive rate isn't necessarily an issue if you can produce a working PoC to demonstrate the true positives, where they kinda-sorta admit that you might need a stronger model for this (a.k.a. what they can't provide to their customers). Overall I rate Aisle intellectually dishonest hypemongers talking their own book. |
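| To put a number on the rock comparison: a detector is only informative to the extent it fires more often on vulnerable code than on patched code. In standard likelihood-ratio terms (my arithmetic, not from the post):

```python
def likelihood_ratio(p_flag_given_vuln, p_flag_given_clean):
    """How much a flag should shift your belief that a vuln is present."""
    return p_flag_given_vuln / p_flag_given_clean

# Flagging 3/3 runs on both the vulnerable and the patched code gives a
# ratio of 1.0 -- a flag that carries zero evidence, same as the rock.
```
|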
|
| ▲ | alpha_squared 8 hours ago | parent | prev | next [-] |
| This is addressed elsewhere in the comments, but it appears this is actually a direct comparison to how Anthropic got their Mythos headline results. https://news.ycombinator.com/item?id=47732322 |
| |
| ▲ | Aurornis 8 hours ago | parent [-] | | How is that a direct comparison? The link you gave has a quote that says it’s not: > Scoped context: Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints They pointed the models at the known vulnerable functions and gave them a hint. The hint part is what really breaks this comparison because they were basically giving the model the answer. | | |
| ▲ | cyanydeez 7 hours ago | parent [-] | | Does no one defending Mythos understand how nested for loops work? loop through each repo:
    loop through each file:
        opencode command /find_wraparoundvulnerability
    next file
next repo
I can run this on my local LLM and sure, I gotta wait some time for it to complete, but I see zero distinguishing facts here. | | |
| ▲ | johnfn 3 hours ago | parent | next [-] | | No one is suggesting your nested-for-loop idea because it won't actually work in practice. In short, the signal-to-noise ratio will be too low: you will need to comb through a ton of false positives to find anything valuable, at which point it stops looking like "automated security research" and starts looking like "normal security research". If you don't believe me, try it yourself; it only costs a couple of dollars. Hey, maybe you're right, and you can prove us all wrong. But I'd bet you at great odds that you're not. | |
| ▲ | Dylan16807 6 hours ago | parent | prev | next [-] | | The question is how customized those hints were. That changes whether looping over an entire code base is possible or not. | |
| ▲ | u_fucking_dork 6 hours ago | parent | prev [-] | | Please do so, looking forward to your write up |
|
|
|
|
| ▲ | davemp 2 hours ago | parent | prev | next [-] |
| > Across a thousand runs through our scaffold, the total cost was under $20,000 Lots of questions about the $20k. Is that raw electricity cost, or subsidized user token cost? If the latter, the actual cost to run these sorts of tasks sustainably could be something like $200k. Even at $50k, a FreeBSD DoS is not an extremely competitive price; that's like 2-4 months of labor. Don't get me wrong, I think this seems like a great use for LLMs. It intuitively feels like a much more powerful form of white-box fuzzing, which used techniques like symbolic execution to guide execution contexts toward more important code paths. |
|
| ▲ | SoftTalker 8 hours ago | parent | prev | next [-] |
| How much of that is simply scale? Anthropic probably threw an entire data center at analyzing the code base. Has anyone done the same with a "small" model? |
| |
|
| ▲ | andy_ppp 3 hours ago | parent | prev | next [-] |
| I wonder if you could just set up a small model, suggest a load of things, and try every file; it might still end up cheaper and just as good as Mythos at a specific task. Maybe this will hold true more broadly: steering a small model to do specific things may end up as effective/efficient as a larger model searching a huge solution space. |
|
| ▲ | lmeyerov 5 hours ago | parent | prev | next [-] |
| Instead of scanning more code, afaict what you seem to want is to scan the same small area and compare how many FPs are found there. A common measure here is what % of reported issues got labeled as security issues and fixed. I don't see Mythos publishing a relative FP rate, so dunno how to compare those. Maybe something substantively changed? At the same time, I'm not sure that really changes anything, because I don't see a reason to believe attackers are constrained by the quality of source-code vulnerability-finding tools, at least not for the last 10-15 years since open-source fuzzing tools got a lot better, popular, and industrialized. This might sound like a grumpy reply, but as someone on both sides here, it's easy to maintain two positions: 1. This stuff is great, and doing code reviews has been one of my favorite Claude Code use cases for a year now, including security review. It is both easier to use than traditional tools and opens up higher-level analysis too. 2. Finding bugs in source code was already sufficiently cheap for attackers. They don't need the ease of use or the high-level analysis in practice; there's enough tooling out there that finds enough of these. Likewise, attack groups have already industrialized. There may be an element of vuln-pocalypse coming as ease of use goes further than what's already happening with existing out-of-the-box blackbox and source-code scanning tools. That's not really what I worry about, though. Scarier to me is what this does to today's reliance on human response. AI rapidly industrializes how attackers escalate access and wedge in once they're inside. Even without AI, that's been getting faster and more comprehensive; with AI, the higher-level orchestration gets much more aggressive for much less capable people. So the steady stream of existing vulns and takeovers turning into much more industrialized escalations is what worries me more.
As coordination keeps moving to machine speed, the current reliance on human response is becoming less and less of an option. |
|
| ▲ | hoppp 6 hours ago | parent | prev | next [-] |
| If they pay me $20k and give me the time, maybe I find it too. |
| |
| ▲ | LordDragonfang 2 hours ago | parent [-] | | No, you wouldn't. The vulnerability has been in the codebase for 17 years. Orders of magnitude more than $20k in security-professional salary-hours have been pointed at the FreeBSD codebase over the past decade and a half, so we already know a human was unlikely to find it in any reasonable amount of time. |
|
|
| ▲ | lukev 6 hours ago | parent | prev | next [-] |
| This is a really interesting point though -- it's really scaffold-dependent. Because for the same price, you could point the small model at each function, one by one, N times each, across N prompts instructing it to look for a specific class of issue. It's not that there's no difference between models, but it's hard to judge exactly how much difference there is when so much depends on the scaffold used. For a properly scientific test, you'd need to use exactly the same one. Which isn't possible when Anthropic won't release the model. |
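The per-function scaffold described above can be sketched as follows. `ask_model` is a hypothetical stand-in for a real inference call (here a toy keyword check so the example runs deterministically); the issue classes, trial count, and unanimity rule are all illustrative choices, not anyone's published methodology.

```python
# Sketch of a per-function scaffold: hit each function with N prompts per
# vulnerability class, keep only findings that are consistent across trials.
# ask_model is a stub; a real version would call a small model's API.

ISSUE_CLASSES = ["integer wraparound", "off-by-one", "use-after-free"]
N_TRIALS = 3

def ask_model(prompt: str) -> bool:
    # Stub: pretend the model flags wraparound whenever it sees "n + 1".
    return "wraparound" in prompt and "n + 1" in prompt

def scan(functions: dict[str, str]) -> list[tuple[str, str]]:
    reports = []
    for name, body in functions.items():
        for cls in ISSUE_CLASSES:
            hits = sum(ask_model(f"Check for {cls}:\n{body}")
                       for _ in range(N_TRIALS))
            if hits == N_TRIALS:   # require a unanimous verdict across trials
                reports.append((name, cls))
    return reports

funcs = {"safe_add": "return a + b;",
         "bump": "size_t i = n + 1; buf[i] = 0;"}
print(scan(funcs))   # -> [('bump', 'integer wraparound')]
```

The total call count is functions × classes × trials, which is exactly why the comparison is scaffold-dependent: the same model budget can be spent very differently.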
|
| ▲ | letitgo12345 7 hours ago | parent | prev | next [-] |
| Can't you execute an exploit for the bug to see if the vulnerability is real? Then you'd have a perfect filter. Maybe Mythos decided without executing, but we don't know that. |
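The execute-to-verify filter suggested above can be sketched like this. The "PoCs" here are toy Python snippets run in subprocesses; a real pipeline would run generated exploits against the actual build in a sandbox, and note that some vulnerability classes (e.g. info leaks) don't crash anything, so the filter isn't truly perfect.

```python
# Toy version of "only report candidates whose PoC actually misbehaves":
# run each candidate PoC in a subprocess and keep it only if it fails.
import subprocess
import sys

candidates = {
    "real_bug":  "x = [1, 2, 3]; print(x[10])",   # IndexError -> nonzero exit
    "false_pos": "x = [1, 2, 3]; print(x[1])",    # runs fine -> exit 0
}

def confirmed(poc: str, timeout: int = 10) -> bool:
    """Treat a crash/exception (nonzero exit) as confirmation."""
    proc = subprocess.run([sys.executable, "-c", poc],
                          capture_output=True, timeout=timeout)
    return proc.returncode != 0

print([name for name, poc in candidates.items() if confirmed(poc)])
# -> ['real_bug']
```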
|
| ▲ | glerk 6 hours ago | parent | prev | next [-] |
| I'm having trouble finding this info (I assume they won't publish it), but could the secret sauce be a much larger and more readily accessible context window? OpenBSD's code is in the tens of millions of lines. Being able to hold all of it in context would make bug finding much easier. |
| |
| ▲ | johnfn 5 hours ago | parent [-] | | You can look at some of the bugs, if you'd like. They are (at least the ones I looked at) fairly self-contained, scoped to a single function, a hundred lines or less. There's no need for a massive amount of context. |
|
|
| ▲ | cyanydeez 7 hours ago | parent | prev | next [-] |
| So what you're saying is no one could ever write a loop like: for githubProject in githubProjects
    opencode command /findvulnerability
end for
Seems like a silly thing to try to back up. |
| |
| ▲ | tredre3 6 hours ago | parent [-] | | What he's saying is that you should read the "Caveats and limitations" section of the article. Here's the first one: > Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). Mythos had no such help; it was cut loose and told to find vulnerabilities. If the intent was to prove that small models are just as good, they haven't demonstrated that at all. The end. | | |
| ▲ | cyanydeez 3 hours ago | parent [-] | | ok, but you're missing the obvious: I could also give it the vulnerable function by just looping over all functions and providing a small hint about what to look at. Until "Mythos" is compared against the most bland and straightforward harness plus a small model, there's no great context god that can't be emulated with deterministic scanning and context pulls. |
|
|
|
| ▲ | 7 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | mehmetkerem 4 hours ago | parent | prev [-] |
| [dead] |