| ▲ | rishabhaiover 6 hours ago |
| What is happening? I see multiple outages and CVEs being reported on HN's front page. I've never seen this many security/incident-related posts on HN's front page. |
|
| ▲ | spindump8930 5 hours ago | parent | next [-] |
| Some combination of reporting bias, given concerns about LLM security capabilities, and actual new vulnerabilities found with LLM assistance. Even if the exploits and outages are unrelated to LLMs, I'm certainly thinking about whether Claude could build these things (or if actors already have). |
|
| ▲ | NitpickLawyer 5 hours ago | parent | prev | next [-] |
| > What is happening? |
| Slowly at first, and then suddenly. AI-assisted anything follows this trend. As capabilities improve, new avenues become "good enough" to automate. Today it's security. |
|
| ▲ | elija 24 minutes ago | parent | prev | next [-] |
| In some sense, I wonder if closed source is "safer", since LLMs can't mass-scan the code for exploits. |
|
| ▲ | john_strinlai 5 hours ago | parent | prev | next [-] |
| I believe a good portion of the CVEs hitting the front page are there more because they're AI-related (found partially or wholly by AI) and make for quick upvotes. |
|
| ▲ | majorchord 5 hours ago | parent | prev | next [-] |
| AI is happening. |

| ▲ | cachius 5 hours ago | parent [-] |
| In each recent case? |

| ▲ | gordonhart 5 hours ago | parent [-] |
| AI assistance was explicitly disclosed on yesterday's. Today's lists Claude as one of two contributors on its GitHub Pages site, so it's very likely as well. Agents are capable of finding this kind of thing now, and people are having a field day using them to find high-profile CVEs for fun or profit. |
|
|
|
| ▲ | sva_ 2 hours ago | parent | prev | next [-] |
| A mix of AI and hybrid warfare. |
|
| ▲ | gilrain 5 hours ago | parent | prev | next [-] |
| Automated vulnerability discovery via LLM. |

| ▲ | ryandrake 3 hours ago | parent | next [-] |
| Anyone care to share which models and which prompts actually lead to finding these kinds of vulnerabilities? Or the narrowing-down workflow that can get an LLM to discover them? Surely just telling Claude "Find all vulnerabilities in this project LOL" isn't enough? I hope? |
| ▲ | Arcuru 2 hours ago | parent | next [-] |
| The Anthropic researchers have said their flow is as simple as: |
| 1. Pick a file to seed as a starting place. |
| 2. Ask the LLM (in an agent harness) to find a vulnerability starting there. |
| 3. If it claims to have found something, ask another instance to create an exploit and verify/prove it. |
| 4. If both conclude there is a vuln, then with the latest models you've almost certainly found something real. |
| Just run it against every file in a repo, select a subset, or have an LLM pick candidates with a simple "which files look likely to have vulns?". So basically, yes, it is that simple. It's just a matter of having the money to pay for the tokens. |
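The find-then-verify loop described above can be sketched as a small orchestrator. This is a minimal illustration, not anyone's actual tooling: the `find_vuln` and `verify_vuln` callables are hypothetical stand-ins for real agent calls (stage 1 hunts for a claim, stage 2 independently tries to confirm it), and only doubly-confirmed claims are kept.

```python
from pathlib import Path
from typing import Callable, Optional


def triage_repo(
    repo: Path,
    find_vuln: Callable[[Path], Optional[str]],
    verify_vuln: Callable[[Path, str], bool],
    pattern: str = "*.c",
) -> dict[str, str]:
    """Run the two-stage find-then-verify loop over every matching file.

    find_vuln:   asks one agent to hunt for a vulnerability seeded at `path`;
                 returns a claim (a short description) or None.
    verify_vuln: asks a second, independent agent to build an exploit or
                 otherwise prove the claim; returns True only if it holds up.
    Returns a mapping of relative file path -> confirmed claim.
    """
    confirmed: dict[str, str] = {}
    for path in sorted(repo.rglob(pattern)):
        claim = find_vuln(path)                  # stage 1: hunter agent
        if claim and verify_vuln(path, claim):   # stage 2: verifier agent
            confirmed[str(path.relative_to(repo))] = claim
    return confirmed
```

The point of the two independent calls is that a single model's unverified claim is often a false positive; requiring a second agent to actually reproduce the finding is what makes the "almost certainly real" step work.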

| ▲ | pixl97 5 hours ago | parent | prev [-] |
| Everyone was talking about how Mythos was overblown marketing, and while it may be, they missed the forest for the trees. Capabilities have been escalating for a year now, and we're at the point of widespread impact. I don't expect we'll see a slowdown for a long time. |
|
|
| ▲ | themafia 3 hours ago | parent | prev [-] |
| Perhaps it was the prior quiescent period that was the anomaly. |