f311a 4 hours ago

How would it work if LLMs provide incorrect reports in the first place? Have a look at the actual HackerOne reports and their comments.

The problem is the complete stupidity of people. They use LLMs to try to convince the curl author that he is wrong when he says a report is hallucinated. Instead of generating ten LLM comments and doubling down on their incorrect report, they could use a bit of brain power to actually validate it. It does not even require much skill; you just have to test it manually.