|
| ▲ | sweezyjeezy 7 hours ago | parent | next [-] |
| > But the entire value is that it can be automated. If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. Or none.
'Or none' is ruled out since it found the same vulnerability. I agree there's a question about the smaller model's precision, but barring further analysis, '9,500' feels like pure vibes on your part? Also (out of interest), did Anthropic post their false-positive rate? The smaller model is clearly the more automatable one IMO if it has comparable precision, since it's just so much cheaper - you could even run it multiple times and take a consensus. |
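To spell out the consensus idea: here's a toy sketch (entirely mine, not from either writeup) where model_flags_vuln() is a hypothetical stand-in for a real small-model API call, stubbed with a coin flip just so it compiles:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a small-model API call: returns 1 if the
 * model flags the snippet as vulnerable. Stubbed with a coin flip. */
static int model_flags_vuln(const char *snippet)
{
    (void)snippet;
    return rand() % 2; /* replace with a real model call */
}

/* Flag a snippet only when a strict majority of independent runs agree,
 * trading a little recall for (hopefully) much better precision. */
static int consensus_flags_vuln(const char *snippet, int runs)
{
    int votes = 0;
    for (int i = 0; i < runs; i++)
        votes += model_flags_vuln(snippet);
    return 2 * votes > runs;
}

int main(void)
{
    printf("flagged: %d\n", consensus_flags_vuln("some function body", 5));
    return 0;
}
```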
| |
| ▲ | johnfn 6 hours ago | parent | next [-] | | Admittedly just vibes from me, having pointed small models at code and asked them questions - no extensive evaluation process or anything. For instance, I recall models thinking that every single use of `eval` in JavaScript is a security vulnerability, even something obviously benign like `eval("1 + 1")`. But then, I'm only posting comments on HN; I'm not the one writing an authoritative thinkpiece saying Mythos actually isn't a big deal :-) | | |
| ▲ | jorvi 2 hours ago | parent | next [-] | | My proof-in-pudding test is still the fact that we haven't seen gigantic mass firings at tech companies, nor a massive acceleration in quality or breadth (not quantity!) of development. Microsoft has been going heavy on AI for 1y+ now. But then they replace their cruddy native Windows Copilot application with an Electron one. If tests and dev only have marginal cost now, why aren't they going all in on writing extremely performant, almost completely bug-free native applications everywhere? And this repeats itself across all big tech and AI hype companies. They all claim these supposedly earth-shattering gains in productivity, but then.. there hasn't been anything to show for it in years? Despite that whole subset of tech plus big tech dropping trillions of dollars on it? And then there is also the really uncomfortable question for all tech CEOs and managers: LLMs are better at 'fuzzy' things like writing specs or documentation than they are at writing code. And LLMs are supposedly godlike. Leadership is a fuzzy thing. At some point the chickens will come home to roost, and tech companies with LLM CEOs / managers and human developers - or even completely LLM-run ones - will outperform human-led / managed companies. The capital class will jeer about that for a while, but the cost of tokens will continue to drop to near zero. At that point, they're out of leverage too. | |
| ▲ | johnfn 15 minutes ago | parent | next [-] | | Your proof-in-pudding test seems to assume that AI is binary -- either it accelerates everyone's development 100x ("let's rewrite every app into bug-free native applications") or nothing ("there hasn't been anything to show for that in years"). I posit reality is somewhere in between the two. | |
| ▲ | MidnightRider39 an hour ago | parent | prev | next [-] | | Leadership is also a very human thing. I think most people would balk at the idea of being led by an LLM. One of the main functions of leaders is (or should be) to assume responsibility for decisions and outcomes. A computer can't do that. And finally, why would someone in power choose to replace themselves? | |
| ▲ | shard972 2 hours ago | parent | prev [-] | | [dead] |
| |
| ▲ | bloaf 3 hours ago | parent | prev | next [-] | | I remember a study from a while back that found something like "50% of 2nd graders think that french fries are made out of meat instead of potatoes. Methodology: we asked kids if french fries were meat or potatoes." Everyone was going around acting like this meant 50% of 2nd graders were stupid with terrible parents. (Or, conversely, that 50% of 2nd graders were geniuses for "knowing" it was potatoes at all) But I think that was the wrong conclusion. The right conclusion was that all the kids guessed and they had a 50% chance of getting it right. And I think there is probably an element of this going on with the small models vs big models dichotomy. | | |
| ▲ | Kye 2 hours ago | parent [-] | | I think it also points to the problem of implicit assumptions. Fish is meat, right? Except for historical reasons, the grocery store's marketing says "Fish & Meat." And then there's nut meats. Coconut meat. All the kinds of meat from before meat meant the stuff in animals. The meat of the problem. Meat and potatoes issues. If you asked that question before I'd picked up those implicit assumptions, or if I never did, I would have to guess. |
| |
| ▲ | argee 5 hours ago | parent | prev [-] | | With LLMs (and colleagues) it might be a legitimate problem since they would load that eval into context and maybe decide it’s an acceptable paradigm in your codebase. |
| |
| ▲ | idopmstuff 5 hours ago | parent | prev [-] | | > 'Or none' is ruled out since it found the same vulnerability It's not, though. It wasn't asked to find vulnerabilities over 10,000 files - it was asked to find a vulnerability in the one particular place in which the researchers knew there was a vulnerability. That's not proof that it would have found the vulnerability if it had been given a much larger surface area to search. | | |
| ▲ | sweezyjeezy 4 hours ago | parent [-] | | I don't think the LLM was asked to check 10,000 files in one go, given these models' context windows - I suspect they went file by file too. That's kind of the point. I think there are three scenarios here: a) this is just the first time an LLM has done such a thorough minesweeping,
b) previous versions of Claude did not detect this bug (seems the least likely), or
c) Anthropic have done this several times, but the false-positive rate was so high that they never checked it properly. Between a) and c) I don't have high confidence either way, to be honest. |
|
|
|
| ▲ | mnicky 6 hours ago | parent | prev | next [-] |
| Also, what costs $20,000 today can cost $2,000 next year. Or $20... See e.g. https://epoch.ai/data-insights/llm-inference-price-trends/ |
| |
| ▲ | sumeno 5 hours ago | parent [-] | | Or $200,000 for consumers when they have to make a profit | | |
| ▲ | philipallstar 4 hours ago | parent [-] | | Good point. This is why consumer phones have got much worse since 2005 and now cost millions of dollars. | | |
| ▲ | thmoonbus 3 hours ago | parent | next [-] | | Now do Uber rides | |
| ▲ | ijk 3 hours ago | parent | prev [-] | | With the chip shortage the way it is, I'm a little concerned that my next phone will be worse and more expensive... |
|
|
|
|
| ▲ | integralid 7 hours ago | parent | prev | next [-] |
> Or none
We already know this is not true, because small models found the same vulnerability.
| |
| ▲ | tptacek 6 hours ago | parent | next [-] | | No, they didn't. They distinguished it, when presented with it. Wildly different problem. | | |
| ▲ | enraged_camel 6 hours ago | parent [-] | | Yeah. And it is totally depressing that this article got voted to the top of the front page. It means people aren't capable of even this most basic reasoning, so they jumped on the "aha! so the Mythos announcement was just marketing!!" bandwagon. | |
| |
| ▲ | BoiledCabbage 6 hours ago | parent | prev | next [-] | | > because small models found the same vulnerability.
With a ton of extra support. Note this key passage:
> We isolated the vulnerable svc_rpc_gss_validate function, provided architectural context (that it handles network-parsed RPC credentials, that oa_length comes from the packet), and asked eight models to assess it for security vulnerabilities.
Yeah, it can find a needle in a haystack without false positives - if you first find the needle yourself, tell it exactly where to look, explain all of the context around it, remove most of the hay, and then ask it if there is a needle there. It's good for them to continue showing ways that small models can play in this space, but on my read their post is fairly disingenuous in claiming it's comparable to what Mythos did. I mean, this is the start of their prompt, followed by only 27 lines of the actual function:
> You are reviewing the following function from FreeBSD's kernel RPC subsystem (sys/rpc/rpcsec_gss/svc_rpcsec_gss.c). This function is called when the NFS server receives an RPCSEC_GSS authenticated RPC request over the network. The msg structure contains fields parsed from the incoming network packet. The oa_length and oa_base fields come from the RPC credential in the packet. MAX_AUTH_BYTES is defined as 400 elsewhere in the RPC layer.
The original function is 60 lines long; they ripped out half of it in that prompt, including additional variables, presumably so that the small model wouldn't get confused / distracted by them. You can't really do anything more to force the issue, except maybe include in the prompt the type of vuln to look for! It's great that they're trying to push small models, but this write-up really is just borderline fake. Maybe it would actually succeed, but we won't know from that. Re-run the test and ask it to find the needle without first removing almost all of the hay, pointing directly at the needle, and giving it a bunch of hints. The prompt they used: https://github.com/stanislavfort/mythos-jagged-frontier/blob... Compare it to the actual function, which is twice as long. | |
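To make the bug class concrete for anyone who hasn't clicked through the prompt: it has roughly the shape below. To be clear, this is my own illustrative C, not the actual FreeBSD function - only oa_length, oa_base, and MAX_AUTH_BYTES come from the quoted prompt; the struct layout and validate_cred are invented:

```c
#include <string.h>
#include <stdint.h>

#define MAX_AUTH_BYTES 400 /* per the prompt: defined elsewhere in the RPC layer */

struct opaque_auth {
    uint32_t oa_length; /* parsed from the incoming packet: attacker-controlled */
    char    *oa_base;
};

int validate_cred(const struct opaque_auth *oa)
{
    char buf[MAX_AUTH_BYTES];

    /* BUG: oa_length is trusted before the copy, so an oversized
     * length from the wire overflows buf. The fix is a bounds check
     * first: if (oa->oa_length > MAX_AUTH_BYTES) return 0; */
    memcpy(buf, oa->oa_base, oa->oa_length);

    return buf[0] != 0; /* placeholder for the real validation logic */
}
```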
| ▲ | apgwoz 5 hours ago | parent [-] | | The benefit here is reducing the time to find vulnerabilities; faster than humans, right? So if you can rig a harness for each function in the system - by first finding where it's used, its expected input, etc. - and do that for all functions, does it discover vulnerabilities faster than humans? It doesn't matter that they isolated one thing. It matters that the context they provided was discoverable by the model. | |
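Conceptually something like this - a hand-wavy sketch where ask_model() is a hypothetical model call (stubbed so it compiles) and the fn_record table stands in for whatever indexing step extracts each function plus its discoverable context:

```c
#include <stdio.h>

struct fn_record {
    const char *name;
    const char *context; /* callers, input provenance, relevant #defines */
    const char *body;
};

/* Hypothetical model call: returns 1 if the model flags a vuln. */
static int ask_model(const char *context, const char *body)
{
    (void)context;
    (void)body;
    return 0; /* stub - wire up a real model API here */
}

/* Walk every function in the codebase with its context and ask. */
static void scan_all(const struct fn_record *fns, int n)
{
    for (int i = 0; i < n; i++)
        if (ask_model(fns[i].context, fns[i].body))
            printf("possible vuln in %s\n", fns[i].name);
}

int main(void)
{
    struct fn_record fns[] = {
        { "svc_rpc_gss_validate",
          "handles network-parsed RPC credentials; oa_length comes from the packet",
          "/* 60-line function body */" },
    };
    scan_all(fns, 1);
    return 0;
}
```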
| ▲ | woeirua 4 hours ago | parent [-] | | There is absolutely zero reason to believe you could use this same approach to find and exploit vulns without Mythos finding them first. We already know that older LLMs can’t do what Mythos has done. Anthropic and others have been trying for years. | | |
| ▲ | apgwoz 31 minutes ago | parent | next [-] | | Why? They claim this small model found a bug given some context. I assume the context wasn't "hey! There's a very specific type of bug sitting in this function when certain conditions are met." We keep assuming that the models need to get bigger and better, when the reality is we've not exhausted the ways to use the smaller models. It's like the PlayStation 2 games that came out 10 years into the console's life: by then all the tricks had been found, and everything improved. | |
| ▲ | nozzlegear 3 hours ago | parent | prev [-] | | > There is absolutely zero reason to believe you could use this same approach to find and exploit vulns without Mythos finding them first.
There's one huge reason to believe it: we can actually use small models, but we can't use Anthropic's special marketing model that's too dangerous for mere mortals. | |
| ▲ | Filligree 27 minutes ago | parent [-] | | If all you have is a spade, that is _not_ evidence that spades are good for excavating an entire hill. |
|
|
|
| |
| ▲ | 7 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | sandeepkd 3 hours ago | parent | prev | next [-] |
| The security researcher is charging a premium for all the effort they put into learning the domain. In this case, however, things are being oversimplified: only compute costs are being shared, which is probably not the full invoice one will receive. Training costs and investments need to be recovered, along with salaries. Machines being faster and more accurate is the differentiating factor once the context is well understood. |
|
| ▲ | john_minsk 5 hours ago | parent | prev | next [-] |
| In the future there shouldn't be any bugs. I'm not paying $20 per month to get a non-secure codebase from AGI. |
|
| ▲ | SpicyLemonZest 7 hours ago | parent | prev | next [-] |
| What the source article claims is that small models are not uniformly worse at this, and in fact they might be better at certain classes of false-positive exclusion. This is what Test 1 seems to show. (I would emphasize that the article doesn't claim, and I don't believe, that this proves Mythos is "fake" or doesn't matter.) |
|
| ▲ | ALittleLight 4 hours ago | parent | prev | next [-] |
| 3 years ago the best model was DaVinci. It cost 3 cents per 1k tokens (same price in and out). Today, GPT-5.4 Nano is much better than DaVinci was, and it costs 0.02 cents in and 0.125 cents out per 1k tokens. In other words, a significantly better model is also 1-2 orders of magnitude cheaper. You can cut that in half with batch processing, and cut it another order of magnitude by running something like Gemma 4 on cloud hardware, or even more on local hardware. If this trend continues another 3 years, what costs $20k today might cost $100. |
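Back-of-envelope, using the per-1k-token figures above:

```c
#include <stdio.h>

int main(void)
{
    double davinci_in  = 3.0;   /* cents per 1k tokens, 3 years ago */
    double davinci_out = 3.0;
    double nano_in     = 0.02;  /* cents per 1k tokens today */
    double nano_out    = 0.125;

    printf("input:  %.0fx cheaper\n", davinci_in / nano_in);   /* 150x */
    printf("output: %.0fx cheaper\n", davinci_out / nano_out); /*  24x */
    return 0;
}
```

So 24-150x, i.e. 1-2 orders of magnitude, before batch discounts.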
| |
|
| ▲ | siva7 6 hours ago | parent | prev | next [-] |
| Except you would need about 10,000 security researchers in parallel to inspect the whole FreeBSD codebase. So about 200 million dollars at least (10,000 × $20,000). |
|
| ▲ | amazingamazing 7 hours ago | parent | prev | next [-] |
| Citation needed for basically all of this. You're creating a double standard for small models vs Mythos… |
| |
| ▲ | johnfn 6 hours ago | parent [-] | | The citation is the Anthropic writeup. | | |
| ▲ | amazingamazing 5 hours ago | parent [-] | | They did not say what you are saying…
> If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. | |
| ▲ | johnfn 3 hours ago | parent [-] | | What I am saying is that the approach the Anthropic writeup took and the approach Aisle took are very different. The Aisle approach is vastly easier on the LLM. I don't think I need a citation for that. You can just read both writeups. The "9500" quote is my conjecture of what might happen if they fix their approach, but the burden of proof is definitely not on me to actually fix their writeup and spend a bunch of money to run a new eval! They are the ones making a claim on shaky ground, not me. |
|
|
|
|
| ▲ | youre-wrong3 5 hours ago | parent | prev [-] |
| [dead] |