Remix.run Logo
crazygringo 3 hours ago

> “We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability,” the report said.

I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

I'm not being snarky or critical, I'm genuinely wondering what about an attack could possibly indicate it was discovered with LLM assistance?

Like, unless the attackers' computers have been seized and they've been able to recover the actual LLM transcript history? But nothing in the article indicates that the hackers have been caught, just that a patch was developed.

chromacity 2 hours ago | parent | next [-]

> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

Google, Cloudflare, and Microsoft are a trio of companies that get to see most of what's going on the internet. I imagine that if they see you attacking them, they can work back from that and get remarkably far, even against sophisticated actors. If it's their LLM, they presumably keep transcripts. If you searched for the affected API function via a search engine, they almost certainly know. Even if you used a competing search product, you probably went to a site that has Google Analytics. Oh, and one of these companies probably has your DNS lookups. And a good chunk of the world's email traffic. And telemetry from your workstation. And auto-uploaded crash reports... And if it's bad, they can work together behind the scenes to get to the bottom of it.

So, when their threat intel orgs say they have high confidence in something, I'd be inclined to believe it.

Hupriene an hour ago | parent [-]

None of what you've said is untrue. And if this was an internal report to an executive, I'd agree with it. But this is a public statement and I'm more inclined to believe that this is part of a coordinated run up to a move to ban the import of 'dangerous' Chinese AI models -- or something else equally self serving -- than a simple statement of truth.

I don't doubt that they found some evidence of AI use. I'm just skeptical that the amount and strength of evidence has anything to do with their making this statement.

I've been thinking about why the AI companies are making so much use of fear based marketing. And I'm wonder if it isn't just naked Machiavellianism at work.

For a long time tech companies were forced to compete for power by being the most loved (or at least not the most hated). But now they've found an avenue to cultivate fear.

DrewADesign 2 hours ago | parent | prev | next [-]

Well, it’s great marketing for LLM products at the enterprise level. Even if they weren’t sure, they have every incentive to run with it now, and the issue a “whoopsie daisy” apology later after the tech media stopped paying attention.

dragonelite an hour ago | parent [-]

This is why i can't wait for a new AI winter or atleast a fall(the bubble deflating slowly). Just like you can now really see how useful web3 and NFT really are...

_alternator_ 19 minutes ago | parent | prev | next [-]

The article strongly implies they have the (Python) source code, and that it looks LLM generated. I don't know about you, but I can usually tell LLM code from a mile away.

glenstein 2 hours ago | parent | prev | next [-]

The article says it included excessive explainer text. And I'm almost positive an earlier version of the article referenced hallucinated library references though I don't see it in the present version of the article.

eatsyourtacos 3 hours ago | parent | prev | next [-]

Maybe after they realized how they were vulnerable they asked an LLM to find the exploit through a similar means to try and replicate it. Still doesn't prove it but maybe gives them confidence this weird thing can only really be found that way etc.

slater 2 hours ago | parent | prev | next [-]

> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

Excessive use of em-dashes, and emoji bullet points in the readme

yacthing 2 hours ago | parent | prev | next [-]

Maybe they saw traffic that looked like AI prodding an API and quickly adapting to find the bug?

But at this point I feel like odds are everyone looking for vulnerabilities is using AI to some extent. Why wouldn't they? It'd be stranger if they didn't.

ai_fry_ur_brain 2 hours ago | parent [-]

Because we dont want to fry our brains by using this junk.

nullc 3 hours ago | parent | prev [-]

Presumably the attacker used Google's own LLM and they searched the history of all user chats to find the transcript.

I say this only slightly in jest, as that's about the only thing I can think of which would legitimately give them 'high confidence'.

djeastm 3 hours ago | parent [-]

In the article (AP one, at least) Google explicitly said it does not believe it was Gemini or Mythos.

bmelton 3 hours ago | parent [-]

Clearly that's because they searched the history of all chats and didn't find the perpetrator

HDBaseT 2 hours ago | parent | next [-]

I know we're talking about Google here, but the privacy violations and concerns from this sort of search are massive.

We need local AI ASAP.

gchamonlive 2 hours ago | parent | next [-]

Don't get me wrong, I'm with you here, but we are back to the days when we had to rent mainframe time for compiling programs. Not because of software limitations, but you just didn't have consumer grade hardware capable of running them.

This time, however it's even worse, because it'll be a really long time until either we get consumer GPUs with enough VRAM for full models or LLMs that fit in 16-32GB capable enough to compete with cloud providers.

I run locally qwen3.6 27b on my 3090 and it's really impressive for what it is, but it is still generations away from being capable of delivering a level of quality that we can confidently default to solo drive them on a daily basis.

overfeed an hour ago | parent | prev [-]

> We need local AI ASAP.

That is an excellent idea, once we, the GPU-poor mice, figure out who is going to bell the SoTA training cat. Chinese models being banned is well within the realms of lobbied possibilities.

BobbyTables2 2 hours ago | parent | prev [-]

They probably used AI for the search.

The real game would be to put a “nothing of interest here” prompt injection attack in the original series of prompts so a LLM parsing them later would ignore the attackers’ session.