CrzyLngPwd 5 hours ago

People used LLMs to find flaws in Google software.

adrianmonk 3 hours ago

If you're talking about the incident described in the article, it says it was a flaw in "a popular open-source, web-based system administration tool".

Google's blog (https://cloud.google.com/blog/topics/threat-intelligence/ai-...) says Google "worked with the impacted vendor to responsibly disclose this vulnerability", so in this incident the flaw was not in Google's own software.

amelius 5 hours ago

But did they use Gemini?

Andrex 4 hours ago

> the company added that it did not believe it was its own Gemini chatbot.

-TFA

freedomben 4 hours ago

I don't know, but given how often Gemini refuses benign requests IME, I'd suspect it would be a complete non-starter for finding security holes.