rikafurude21 7 hours ago

This feels more like an old problem getting reframed as an AI problem.

People were already diffing kernel commits and figuring out which ones were security fixes long before LLMs. If a patch lands publicly, the race has basically already started.
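
To give a rough sense of what that triage looks like, here's a minimal sketch against a local kernel checkout; the git plumbing is real, but the repo path is a placeholder and the keyword screen is just a stand-in for whatever heuristic or model someone would actually plug in:

    import subprocess

    SUSPECT_WORDS = ("overflow", "use-after-free", "out-of-bounds",
                     "refcount", "double free", "bounds check", "sanitize")

    def recent_commits(repo, n=50):
        # (sha, subject) pairs for the last n commits.
        out = subprocess.run(
            ["git", "-C", repo, "log", f"-{n}", "--pretty=%H %s"],
            capture_output=True, text=True, check=True).stdout
        pairs = []
        for line in out.splitlines():
            sha, _, subject = line.partition(" ")
            pairs.append((sha, subject))
        return pairs

    def commit_diff(repo, sha):
        return subprocess.run(
            ["git", "-C", repo, "show", sha],
            capture_output=True, text=True, check=True).stdout

    def looks_like_security_fix(subject, diff):
        # Crude keyword screen; this is where an LLM classifier would slot in
        # ("does this change fix a memory-safety or logic bug with security impact?").
        text = (subject + "\n" + diff).lower()
        return any(w in text for w in SUSPECT_WORDS)

    repo = "/path/to/linux"   # placeholder: any local kernel checkout
    for sha, subject in recent_commits(repo):
        if looks_like_security_fix(subject, commit_diff(repo, sha)):
            print("worth a closer look:", sha[:12], subject)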

Also not sure shorter embargoes really help. The orgs that can patch in hours are already fine. Everyone else still takes days or weeks.

If anything, cheaper exploit generation probably makes coordinated disclosure more important, not less.

JumpCrisscross 7 hours ago | parent | next [-]

> people were already diffing kernel commits and figuring out which ones were security fixes

With skill, and usually not consistently and systematically. With AI, anyone can do this to any software.

> not sure shorter embargoes really help

Why 90 days versus 2 years? The author is arguing the factors that set that balance have shifted, given the frequency of simultaneous discovery. The embargo window isn’t an actual window, just an illusion, if the exploit is going to be found by several people outside the embargo anyway.

> cheaper exploit generation probably makes coordinated disclosure more important

I agree. But it also makes it less viable. If script kiddies can find and exploit zero days, the capacity to co-ordinate breaks down.

There was always a guild ethic that drove white-hate (EDIT: hat) culture. If the guild is broken, the ethic has nothing to stand on.

Hizonner 6 hours ago | parent | next [-]

> With skill, and usually not consistently and systematically.

How do you know? The fact that the people who like to crow about vulnerabilities aren't doing it doesn't mean that the people who are actually in a position to exploit them systematically and effectively aren't doing it.

Those embargoes have always been dangerous, because they create a false sense of security. But, as you point out...

> With AI, anyone can do this to any software.

Yep. Even if it hadn't been true before, it's clear that now you just have to assume that everybody relevant will immediately recognize the security impact of any patch that gets published. That includes both bugs fixed and bugs introduced.

... and as the AI gets better, you're going to have to assume that you don't even have to publish a patch. Or source code. Within way less time than it's going to take people to admit it and adjust, any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.

thereisnospork 6 hours ago | parent | next [-]

>any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.

Shouldn't this naturally lead to a state where all (new) code is vulnerability-free? If AI vulnerability-detection friction becomes low enough, it'll become common/forced practice to pre-scan code.
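
A pre-merge gate along those lines is basically a one-pager. Minimal sketch, where scan_diff_for_vulns() is a placeholder for whatever scanner or model you'd actually trust, and the "origin/main" base and exit-code contract are assumptions rather than any particular CI's API:

    import subprocess, sys

    def diff_against(base="origin/main"):
        # Changes on this branch relative to the merge base with `base`.
        return subprocess.run(
            ["git", "diff", f"{base}...HEAD"],
            capture_output=True, text=True, check=True).stdout

    def scan_diff_for_vulns(diff):
        # Placeholder: a static analyzer, fuzz-target selection, or an LLM
        # review pass; return a list of human-readable findings.
        return []

    findings = scan_diff_for_vulns(diff_against())
    for f in findings:
        print("possible issue:", f, file=sys.stderr)
    sys.exit(1 if findings else 0)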

organsnyder 6 hours ago | parent | next [-]

Finding a vulnerability by looking at the diff that fixed it is very different than just looking through the code.

Izkata an hour ago | parent [-]

They're saying to run that scan on every diff before release, to see if it finds anything.

riknos314 30 minutes ago | parent [-]

I believe their point was that:

"How likely is this diff a patch for an existing vulnerability?"

Seems to be an easier question to answer than

"Are there any new vulnerabilities introduced by this diff?"

In other words, identifying that a patch is for a vulnerability is typically easier than finding the vulnerability in the first place.
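
Put as code, the asymmetry is roughly this; ask_model() is a purely hypothetical stand-in, not any real API, and the toy diff is made up:

    # Hypothetical stand-in for a review model; not a real API.
    def ask_model(question, diff):
        return None

    diff = """\
    --- a/copy.c
    +++ b/copy.c
    @@ -10,7 +10,7 @@
    -    memcpy(dst, src, len);
    +    memcpy(dst, src, min(len, dst_size));
    """

    # Easier question: the changed lines usually carry the hint on their own
    # (here, a bounds clamp being added).
    was_a_fix = ask_model("Does this diff fix an existing security bug?", diff)

    # Harder question: answering it needs reasoning about every caller and
    # code path the change participates in, not just the lines in the diff.
    adds_a_bug = ask_model("Does this diff introduce a new vulnerability?", diff)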

Hizonner 6 hours ago | parent | prev [-]

> it'll become common/forced practice to pre-scan code.

You'd think.

But then you'd think people would do a lot of other things too. I hope, I guess.

The other danger is that "the cloud" may become even more overwhelmingly dominant. Which of course has its own large security costs.

ragall 6 hours ago | parent | prev [-]

> How do you know?

We know because we can see the average rate of vulnerability discovery and exploitation, and it's definitely going up very fast. Until recently, vulnerabilities were relatively hard to find, and finding them was done by a very restricted group of people world-wide, which made them quite valuable. Not any more.

awesome_dude 6 hours ago | parent [-]

That's correlation, not causation.

It could equally be argued that the AI slop that's being produced makes for a lot more vulnerabilities being shipped. The bigger target makes for the easier discovery.

tempestn 6 hours ago | parent | next [-]

But don't we know that some of the vulnerabilities being discovered predate AI coding?

awesome_dude 5 hours ago | parent [-]

Certainly, and some discoveries have been attributed to AI (I was reading that Mozilla Firefox was praising mythos recently).

But that's not accounting for all of the discoveries, not at all.

I've also seen the npm people talking about the surge in AI code overwhelming the ability to properly review what's being distributed, and a large number of vulnerabilities being attributed to that

jefftk 2 hours ago | parent | prev | next [-]

It likely varies enormously between projects. Linux remains extremely low in slop, and the vulnerabilities being fixed are quite old, so it's improving. Many vibe-coded projects are very sloppy, and are adding a lot of vulnerabilities.

Total number of vulnerabilities likely goes up over time weighting all projects equally, but goes down over time weighting by usage.

awesome_dude 2 hours ago | parent [-]

I mean - you're spot on - which is why I'd be more inclined to ask for actual metrics rather than feels/vibes, and I'd be very clear that the information I was basing my thinking on has enormous pitfalls.

This is the basis for "correlation points to possibly fertile grounds for an investigation"

ragall 6 hours ago | parent | prev [-]

> That's correlation, not causation.

Pragmatically, correlation *is* evidence of causation in favour of the best explanation, until somebody finds a better explanation.

> It could equally be argued that the AI slop that's being produced makes for a lot more vulnerabilities being shipped.

This is also true, and does not exclude the other, because for the moment the vast majority of production software in the world (and therefore the bulk of enticing targets) was written before AI. If LLM-generated software becomes prevalent in commercial setups, then LLM-generated code will eventually become the majority of targets.

awesome_dude 6 hours ago | parent [-]

> Pragmatically, correlation is evidence of causation in favour of the best explanation, until somebody finds a better explanation.

Uh, no.

Correlation is only ever one thing - cause for investigation.

Everything based on correlation alone is speculation.

You can speculate all you like, I have zero issue with that, but that's best prefaced with "I guess"

edit: Science captures this perfectly, and people misunderstand this so fundamentally that there is a massive debate where people who think they are "pro science" argue this so badly with theists that they completely hoist themselves with their own petard.

Science uses the term "theory" because all of our understanding is based on "available data" - and science's biggest contribution to humanity is that it accepts that the current/leading THEORY can and will be retracted if there is compelling data discovered that demonstrates a falsehood.

So - because I know this is coming - yes science is willing to accept some correlation - BUT it's labelled "theory" or "statistically significant" because science is clear that if other data arises then that idea will need to be revisited.

ragall 6 hours ago | parent [-]

Very often you only have limited time for investigation and you have to act now. Action is almost always based on educated guesses.

awesome_dude 5 hours ago | parent [-]

You have moved from "We know" to "We have an educated guess", which is the right way to couch things.

However, I wanted to also point out that relying only on educated guesses can lead us into a position where we are "papering over the cracks" or "addressing the symptoms", not the "underlying cause".

Yes, sometimes that's all that can be done, but sometimes it can also be more damaging than the cause itself (thinking in terms of the cause continuing to fester away whilst we think it's "solved").

ragall 4 hours ago | parent [-]

> You have moved from "We know" to "We have an educated guess"

No. You kept blabbering about "science" when most uses of knowledge are not about science. The original topic was also definitely not "science": it was about having a reasonable opinion about whether, empirically, the rate of discovery of vulnerabilities is increasing or not.

awesome_dude 3 hours ago | parent [-]

Trying to reframe this as 'not science' after being caught on a logical fallacy doesn't change the record. You started with a definitive claim ('We know') to shut down a question. When challenged on the lack of causation, you pivoted to 'educated guesses.'

My point remains: if we misattribute the cause of the rising vulnerability rate (discovery vs. creation), our 'educated guesses' will lead to solutions that address the symptoms while the underlying problem continues to fester. Calling precision 'blabbering' is exactly how we end up with the 'false sense of security' mentioned earlier.

Exhibit A:

ragall 2 hours ago | root | parent | prev | next [–]

> How do you know?

We know because we can see the average rate of vulnerability discovery and exploitation, and it's definitely going up very fast. Until recently, vulnerabilities were relatively hard to find, and finding them was done by a very restricted group of people world-wide, which made them quite valuable. Not any more.

Exhibit B:

ragall 2 hours ago | root | parent | next [–]

Very often you only have limited time for investigation and you have to act now. Action is almost always based on educated guesses.

awesome_dude 6 hours ago | parent | prev | next [-]

> people were already diffing kernel commits and figuring out which ones were security fixes
>
> With skill, and usually not consistently and systematically. With AI, anyone can do this to any software.

I would like to see actual evidence of this, not.. vibes

I mean, this reeks of "Anyone is a Principal developer now" when the truth is there is still work to do.

totetsu 6 hours ago | parent | prev | next [-]

“White-Hat”

gritspants 6 hours ago | parent | prev [-]

I'm here for white-hate culture. You should, you should know better.

lynndotpy 5 hours ago | parent | prev | next [-]

I haven't been keeping tabs for the entirety of Linux development, but has it ever happened before that someone dropped a working exploit built from a mailing-list patch before the fix even hit the kernel?

I haven't seen this kind of thing and I get the impression, despite all the hype, that this will be a frequent phenomenon now thanks to LLMs.

alecco 6 hours ago | parent | prev | next [-]

> Torvalds said that disclosing the bug itself was enough, without the pursuant circus that followed when a major problem has been discovered. [1]

So it's not surprising Dirtyfrag was disclosed by a fix in the Linux kernel. [2]

[1] https://www.zdnet.com/article/torvalds-criticises-the-securi...

[2] https://afflicted.sh/blog/posts/copy-fail-2.html

santoshalper 6 hours ago | parent | prev | next [-]

I'd say it's an old problem being exacerbated by AI.

Forgeties79 5 hours ago | parent | prev | next [-]

I find i’m writing variations of the same comment every week so I’m just going to share a previous version I wrote if you’ll permit the laziness:

https://news.ycombinator.com/item?id=47921829

fragmede 5 hours ago | parent | prev [-]

Reminder: the Ksplice patent expires October 1, 2028.

manquer an hour ago | parent | next [-]

I don't think hot patching holds the same relevance it did in 2010.

Many of today's workloads are containerized and run on roughly ephemeral nodes that can be switched out easily; K8s version upgrades more or less force this. We tend to run more and more off-the-shelf hardware and worry less about individual node failures now.

In-memory updates are also not magic, and can be limited, as they require data-structure semantics to stay essentially unchanged, and they can create their own class of issues/bugs, including security ones.

While I'm sure there are still use cases that dictate this type of update, the need is a lot less than it was 15 years ago, so I doubt the patent expiry will do much for the ecosystem.
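
For what it's worth, the replace-the-node-instead-of-patching-it flow is mostly just cordon/drain/uncordon. A sketch using kubectl via subprocess, where the node names and the provisioning hook are placeholders and the drain flags may or may not suit your workloads:

    import subprocess

    def sh(*args):
        subprocess.run(args, check=True)

    def replace_or_reboot_onto_patched_image(node):
        # Hypothetical hook: terminate the instance behind an autoscaling
        # group, or reboot onto an updated machine image; platform-specific.
        raise NotImplementedError

    def recycle_node(node):
        sh("kubectl", "cordon", node)                 # stop new pods landing here
        sh("kubectl", "drain", node,
           "--ignore-daemonsets", "--delete-emptydir-data")  # evict running pods
        replace_or_reboot_onto_patched_image(node)
        sh("kubectl", "uncordon", node)               # take workloads again

    for node in ("node-a", "node-b", "node-c"):       # placeholder node names
        recycle_node(node)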

whattheheckheck 4 hours ago | parent | prev [-]

What are the implications of that?

fragmede 3 hours ago | parent [-]

It means you wouldn't have to reboot to apply security updates to the Linux kernel, assuming someone does something with that.
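
For reference, the mainline kernel's livepatch mechanism already exposes loaded patches through sysfs. A minimal sketch of checking that from userspace, assuming a kernel built with CONFIG_LIVEPATCH (the path comes from the kernel's livepatch documentation):

    from pathlib import Path

    root = Path("/sys/kernel/livepatch")
    if not root.is_dir():
        print("no livepatch support, or no live patches loaded")
    else:
        for patch in sorted(root.iterdir()):
            enabled = (patch / "enabled").read_text().strip()
            print(f"{patch.name}: enabled={enabled}")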