_se 6 hours ago
It can be correct and slop at the same time. The reporter could have reported it in a way that makes it clear a human reviewed and cared about the report. Slop is a function of how the information is presented and how the tools are used. People don't care that you used LLMs; they care when you send them a bunch of bullshit with 5% of value buried inside it. If you're reading something and you can tell an LLM wrote it, you should be upset. It means the author doesn't give a fuck.
tptacek 4 hours ago | parent
No it can't. These aren't "Show HN" posts about new programs people have conjured with Claude. They're either vulnerabilities or they're not. There's no such thing as a "slop vulnerability". The people who exploit those vulnerabilities do not care how much earlier reporters "gave a fuck" about their report. This is in the linked story: they're seeing increased numbers of duplicate findings, meaning, whatever valid bugs showboating LLM-enabled Good Samaritans are finding, quiet LLM-enabled attackers are also finding. People doing software security are going to need to get over the LLM agent snootiness real quick. Everyone else can keep being snooty! But not here.