crazygringo 3 hours ago

> “We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability,” the report said.

I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day? I'm not being snarky or critical; I'm genuinely wondering what about an attack could possibly indicate it was discovered with LLM assistance.

Like, unless the attackers' computers have been seized and they've been able to recover the actual LLM transcript history? But nothing in the article indicates that the hackers have been caught, just that a patch was developed.
chromacity 2 hours ago

> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

Google, Cloudflare, and Microsoft are a trio of companies that get to see most of what goes on on the internet. I imagine that if they see you attacking them, they can work back from that and get remarkably far, even against sophisticated actors.

If it's their LLM, they presumably keep transcripts. If you searched for the affected API function via a search engine, they almost certainly know. Even if you used a competing search product, you probably visited a site that has Google Analytics. Oh, and one of these companies probably has your DNS lookups. And a good chunk of the world's email traffic. And telemetry from your workstation. And auto-uploaded crash reports...

And if it's bad, they can work together behind the scenes to get to the bottom of it. So, when their threat intel orgs say they have high confidence in something, I'd be inclined to believe it.
DrewADesign 2 hours ago

Well, it’s great marketing for LLM products at the enterprise level. Even if they weren’t sure, they have every incentive to run with the claim now, and then issue a “whoopsie daisy” apology later, after the tech media has stopped paying attention.
_alternator_ 19 minutes ago

The article strongly implies they have the (Python) source code, and that it looks LLM-generated. I don't know about you, but I can usually tell LLM code from a mile away.
glenstein 2 hours ago

The article says the code included excessive explainer text. And I'm almost positive an earlier version of the article mentioned hallucinated library references, though I don't see that in the present version.
eatsyourtacos 3 hours ago

Maybe after they realized how they were vulnerable, they asked an LLM to find the exploit by similar means, to try to replicate it. That still doesn't prove anything, but it might give them confidence that this weird thing can only really be found that way.
slater 2 hours ago

> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

Excessive use of em-dashes, and emoji bullet points in the readme.
yacthing 2 hours ago

Maybe they saw traffic that looked like an AI prodding an API and quickly adapting to find the bug? But at this point I feel like odds are everyone looking for vulnerabilities is using AI to some extent. Why wouldn't they? It'd be stranger if they didn't.
nullc 3 hours ago

Presumably the attacker used Google's own LLM, and they searched the history of all user chats to find the transcript. I say this only slightly in jest, as that's about the only thing I can think of that would legitimately give them "high confidence".