| ▲ | bch 10 hours ago | |
> This is expected in the normal population

A lot of people, regardless of technical ability, have strong opinions about what LLMs are and are not. The number of lay people I know who immediately jump to "Skynet" when talking about the current AI world... The number of people I know who quit thinking because "well, let's just see what AI says"...

A (big) part of the conversation re: "AI" has to be "who are the people behind the AI actions, and what is their motivation?" Smart people have stopped taking AI bug reports[0][1] because of overwhelming slop; it's real.

[0] https://www.theregister.com/2025/05/07/curl_ai_bug_reports/

[1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
| ▲ | LeFantome 8 hours ago | parent [-] | |
The fact that most AI bug reports are low-quality noise says as much or more about the humans submitting them than it does about the state of AI.

As others have said, there are multiple stages to bug reports and CVEs:

1. Discover the bug

2. Verify the bug

You get the most false positives at step 1. Most of these will be eliminated at step 2.

3. Isolate the bug

This means creating a test case that eliminates as much of the noise as possible and provides the bare minimum required to trigger the bug. This greatly aids debugging. Doing step 2 again is implied.

4. Report the bug

Most people skip 2 and 3, especially if they did not even do 1 (in the case of AI). But you can have AI do all 4 to achieve high-quality bug reports.

In the case of a CVE, there is a step 5:

5. Exploit the bug

But you do not have to do step 5 to get to step 2, and step 2 is the one that eliminates most of the noise.
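To make steps 2 and 3 concrete, here is a minimal sketch of what "verify, then isolate" looks like in practice. Everything in it (`parse_header`, its whitespace bug, the inputs) is invented for illustration and has nothing to do with curl or any real project:

```python
# Hypothetical sketch of steps 2-3: verify a bug, then isolate it down to
# the smallest input that still triggers it. parse_header() and its bug
# are made up for this example.

def parse_header(line):
    """Toy header parser with a deliberate bug: it keeps the
    leading space on the value after splitting on ':'."""
    key, _, value = line.partition(":")
    return key, value  # bug: value comes back as " example.com"

# Step 2 (verify): confirm the failure actually happens on realistic input.
key, value = parse_header("Host: example.com")
assert key == "Host"
assert value == " example.com"  # bug reproduces; value should be "example.com"

# Step 3 (isolate): shrink to the bare minimum reproducer to attach
# to the report, so the maintainer can trigger it in one line.
assert parse_header("k: v")[1] == " v"
```

A report built this way carries its own evidence: the maintainer can run the reproducer instead of triaging a speculative claim, which is exactly the noise-elimination the steps above describe.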