gamegoblin | 4 hours ago
AI detectors in general are unreliable, but there are a few made by serious researchers that claim false positive rates as low as 1 in 10,000, e.g. https://arxiv.org/pdf/2402.14873

Having worked in a bigcorp, I've read my fair share of management-speak, and none of it sounds quite as empty as the allegedly AI text. The AI sounds like someone conjuring a parody of management speak rather than actual management speak.

More broadly (and I feel this way about AI code as well as AI prose), I find that part of my brain is always trying to reverse engineer what kind of person wrote this, and what their mental state was when writing it. When reading AI code or AI prose, this part of my brain short circuits a little, because there is no cohesive human mind behind the text.

It's kind of like how you subconsciously learn to detect emotion in tiny facial movements: you also subconsciously learn to reverse engineer someone's mind state from their writing. Reading AI writing feels like watching an alien in a skinsuit try to emulate human emotional cues. It's just not quite right, in a hard-to-describe-but-easy-to-detect way.
dbtablesorrows | 2 hours ago | parent
> When reading AI code or AI prose, this part of my brain short circuits a little, because there is no cohesive human mind behind the text.

This is the most succinct description of my brain's slop detection algorithm.