revolvingthrow 6 hours ago

Semi-related: has the rate of published exploits picked up as of late, or is there simply hype around AI as a security tool (offense or defense), so it's in the news more often?

Feels like there’s something new every other day - linux, windows, mobile, various commonplace tools used by everybody, the list goes on

PopGuessr 3 minutes ago | parent | next [-]

The Mythos announcement was crazy, I think: "...has already found _thousands_ of severe security vulnerabilities across _all_ OSes"!

jcims 4 hours ago | parent | prev | next [-]

I just did some analysis on this last weekend, in 2024 there were roughly 100 CVEs published every day. In April we hit approximately 200 per day.

Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4 1/2 years. Since then it’s approximately two years.
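For what it's worth, the doubling-interval arithmetic checks out as a back-of-the-envelope calculation. A minimal sketch (the ~2-year gap between the two measurements is my assumption, since the comment doesn't pin down which April):

```python
import math

# Rough figures from above: ~100 CVEs/day in 2024, ~200 CVEs/day by April.
rate_then = 100.0     # CVEs published per day, 2024
rate_now = 200.0      # CVEs published per day, April
elapsed_years = 2.0   # assumed time between the two measurements

# Fit a continuous exponential growth rate, then derive the doubling time.
growth = math.log(rate_now / rate_then) / elapsed_years
doubling_interval = math.log(2) / growth  # years for the daily rate to double
print(round(doubling_interval, 1))        # 2.0
```

Consistent with the claimed drop from a ~4-year doubling interval to roughly two years.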

There has definitely been a rapid uptick.

Vexs 4 hours ago | parent | next [-]

Published CVEs seem like a bad metric to use for this, unless we assume the ratio of really-nasty vulns to not-too-bad vulns stays consistent.

carlmr 4 hours ago | parent | next [-]

Also, the question remains whether more CVE-laden code is being produced in the first place, rather than automated detection simply improving.

It's easier to find a needle in the haystack if the haystack is 50% needles.

red-iron-pine an hour ago | parent [-]

have the AI vibe code crappy apps so the related AI vuln finder can fix them

just doubled the value and use cases of your AI solution!

om42 4 hours ago | parent | prev [-]

Another reason published CVEs aren't a great metric: one of the largest contributors to the increase over the past couple of years is that the Linux kernel now files almost all bugs as CVEs, which wasn't the case before.

adikso 4 hours ago | parent | prev | next [-]

I wouldn't read much into the numbers. There were plenty of "scam" CVEs before LLMs that weren't actual vulns. Nowadays it's more popular to collect CVEs, and a lot of people are scanning with LLMs and reporting without checking (as happened with cURL). These CVEs are often not verified by anyone.

There are probably more vulnerabilities being found, but the number of CVEs is not a good metric.

ainch 3 hours ago | parent | prev | next [-]

Did you publish this anywhere? Would love to read more.

Seattle3503 4 hours ago | parent | prev [-]

The rules around CVE reporting changed recently, so it's to be expected that a lot more are accepted.

ftqalj 4 hours ago | parent | prev | next [-]

If one reads between the lines in part 1, the code in question was introduced due to AI features and the exploit was found by humans:

https://projectzero.google/2026/01/pixel-0-click-part-1.html

So AI usage increases bugs and humans have to weed them out!

rcxdude 6 hours ago | parent | prev | next [-]

There are reports from people who manage security bugs in OSS that there has been a big uptick in reports: initially low quality ones that were mostly bogus, but now many more legitimate ones as well.

deaton 5 hours ago | parent | prev | next [-]

This is pure guesswork, as I'm not a security researcher, but my guess would be that AI is increasing the amount of low-quality, exploitable attack surface, while simultaneously providing security researchers with an accelerant for their work. Which is to say, it's great if you use it well and really bad if you use it poorly.

seanieb 5 hours ago | parent [-]

Not low quality if it works!

jayd16 5 hours ago | parent | next [-]

The low quality refers to the features with security holes. So no, it didn't work (in this hypothetical).

recursive 3 hours ago | parent | prev | next [-]

But it is low quality if it's vulnerable to exploits. And if that's the case, I wouldn't say it really "works".

red-iron-pine an hour ago | parent | prev [-]

only until it's ransomware'd

imenani 6 hours ago | parent | prev | next [-]

https://lwn.net/Articles/1065620/

bbayles 6 hours ago | parent | prev | next [-]

I've reported a few very serious issues to vendors of widely used tools in recent weeks, and it's been even more difficult than usual to get them acknowledged; the teams that respond are reportedly swamped.

krupan 4 hours ago | parent | prev | next [-]

There definitely is hype around AI as a security tool right now. Someone else pointed out that the rate of CVEs has gone up, but that doesn't tell us why.

This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.

Aachen 5 hours ago | parent | prev | next [-]

A bit of both (it finds new things, and the news is hyped/blown up), plus a third factor: more people are trying to find things. The authors might have been able to do this already, since you still need a decent understanding to get useful work out of the model and verify its results. But the shiny-new-toy and FOMO factors make people spend hours on this that they'd otherwise have spent on something else.

I've seen quite a few people saying they were inspired by the previous report, the one presented as "the model pointed us to it", and there's also FOMO pushing people to snatch bugs now.

worldsavior 6 hours ago | parent | prev | next [-]

I think AI helped the researchers navigate the codebase better; it's not necessarily that the AI itself succeeded at exploitation.
