toomuchtodo 4 hours ago

Can you share how you confirmed this is LLM-generated? I review vulnerability reports submitted by the general public, and it seems very plausible based on my experience (as someone who both reviews reports and has submitted them), which is why I submitted it. I am also very allergic to AI slop and did not get the slop vibe, nor would I knowingly submit slop posts.

I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.

(my experience is roughly a decade in cybersecurity and risk management, ymmv)

anonymous908213 4 hours ago | parent

The headers alone are a huge giveaway. It spams repetitive, sensational writing tropes like "No X. No Y. No Z." and "X. Not Y" numerous times. Incoherent use of bold type throughout the article. No actually verifiable concrete details. The giant list of bullet points at the end reads exactly like helpful LLM guidance. There are many signals throughout the piece, but I don't have time to do a deep dive. It's fine if you don't believe me; I'm not suggesting the article be removed, just giving a heads-up for people who prefer not to read generated articles.

Regarding your allergy: my best guess is that it was generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's exactly what LLMs excel at. I do agree it is very plausible.

p0w3n3d 3 hours ago | parent

I wonder if there's any probabilistic analyser that could confirm the article is generated, or show which parts might have been.
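
Something like a per-paragraph perplexity score under a small base model, maybe. A minimal sketch (the model choice and the "article.txt" input are placeholders, and low perplexity is only weak evidence either way, not proof):

    # Sketch of a perplexity-based analyser using GPT-2 via Hugging Face
    # transformers. Text a language model finds very "unsurprising"
    # (low perplexity) is weakly more likely to be machine-generated.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    article = open("article.txt").read()  # hypothetical input file
    # Score paragraph by paragraph to see which parts look generated.
    for i, para in enumerate(p for p in article.split("\n\n") if p.strip()):
        print(i, round(perplexity(para), 1))

Commercial detectors like the one mentioned below are essentially fancier versions of this, which is also why they inherit the same failure modes.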

roywiggins 3 hours ago | parent

Pangram[0] thinks the closing part is AI-generated but the opening paragraphs are human. Certainly the closing paragraphs have a bit of an LLM flavor (e.g., a header titled "The Pattern").

[0] https://www.pangram.com

anonymous908213 3 hours ago | parent

There are no automated AI detectors that work. False positives and false negatives are both common, and the false positives in particular make these tools dangerous to rely on. Just as LLMs have not actually replaced competent engineers working on real software, despite all the hysteria about them doing so, they also can't automate detection; a human can build up stronger heuristics. I am fully confident, and would place a large sum of money on this article being LLM-generated if we could verify the bet, but we can't, so you'll just have to take my word for it, or not.
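
To put rough numbers on why the false positives matter (illustrative rates I'm making up for the example, not measurements of Pangram or any real detector):

    # Bayes' rule applied to a detector with a 95% true-positive rate
    # and a 5% false-positive rate. Rates and prevalence are invented
    # purely for illustration.
    def p_ai_given_flag(prevalence: float, tpr: float, fpr: float) -> float:
        """P(text is AI | detector flags it)."""
        flagged = tpr * prevalence + fpr * (1 - prevalence)
        return tpr * prevalence / flagged

    # If only 10% of what you scan is AI-generated, a flag is wrong
    # roughly a third of the time.
    print(p_ai_given_flag(prevalence=0.10, tpr=0.95, fpr=0.05))  # ~0.68

And the lower the real prevalence, the worse the flag gets, which is exactly the situation when people run a detector over everything by default.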