toomuchtodo 4 hours ago
Can you share how you confirmed this is LLM generated? I review vulnerability reports submitted by the general public, and it seems very plausible based on my experience (as someone who both reviews reports and has submitted them), hence why I submitted it. I am also very allergic to AI slop and did not get the slop vibe, nor would I knowingly submit slop posts. I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up. (my experience is roughly a decade in cybersecurity and risk management, ymmv)
anonymous908213 4 hours ago | parent
The headers alone are a huge giveaway. It spams repetitive sensational writing tropes like "No X. No Y. No Z." and "X. Not Y" numerous times. Incoherent usage of bold type throughout the article. Lack of any actually verifiable concrete details. The giant list of bullet points at the end reads exactly like helpful LLM guidance. There are many signals throughout the entire piece, but I don't have time to do a deep dive. It's fine if you don't believe me; I'm not suggesting the article be removed. Just giving a heads-up for people who prefer not to read generated articles. Regarding your allergy, my best guess is that it was generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's the thing LLMs excel at. I do agree it is very plausible.