lucumo 3 hours ago
Yes, and it's a detection loop without feedback. You can never verify that a piece of work in the wild is actually AI. The poster is the only one who really knows, and they'll always say it's not.

This is a problem, because you can easily get stuck in a self-reinforcing loop. You feel strengthened in your conviction that you're good at ferreting out LLM-speak because you've found so much of it, and you find so much of it because you're confident you're good at it. Nobody ever corrects you when you're wrong. Combine that with general overconfidence and you get threads where every other post with correct grammar gets "called out" as AI-generated. It's pretty boring.

There's a similar effect with contentious subjects. You get reams and reams of posts calling the other side out for being part of a Russian/Israeli/Iranian/Chinese troll network. There's no independent falsification or verification for that either, so people just get strengthened in their existing beliefs.
grey-area an hour ago | parent
At this point it’s pretty easy to detect unaltered LLM output, because it is such bad writing. I would hope that will change over time with training; at some point I imagine it will be hard to tell. I honestly don’t know what sites like this will do when that happens. If the only ways of detecting LLMs are that they are subtly wrong or that they post too much, we’d be overrun with them. I’m not sure whether we should be hopeful or fearful that they will improve to the point of being undetectable, but I suspect they will.