Der_Einzige 10 hours ago:
I don't need to "prove it", because all I have to do is link these:

- https://arxiv.org/abs/2409.01754
- https://arxiv.org/abs/2508.01491
- https://aclanthology.org/2025.acl-short.47/
- https://arxiv.org/abs/2506.06166
- https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
- https://osf.io/preprints/psyarxiv/wzveh_v1
- https://arxiv.org/abs/2506.08872
- https://aclanthology.org/2025.findings-acl.987/
- https://aclanthology.org/2025.coling-main.426/
- https://aclanthology.org/2025.iwsds-1.37/
- https://www.medrxiv.org/content/10.1101/2024.05.14.24307373v...
- https://journals.sagepub.com/doi/full/10.1177/21522715251379...
- https://arxiv.org/abs/2506.21817

Either they used an LLM to write part of it, or the linguistic mind virus infected them and now they speak a little bit like an LLM.
myrmidon 10 hours ago (in reply):
Relevant excerpt from your own wiki guideline: "Do not rely too much on your own judgment. [...] if you are an expert user of LLMs and you tag 10 pages as being AI-generated, you've probably falsely accused one editor."

Never accuse people of LLM writing based on short comments; with that little material, your false positive rate is invariably going to be unacceptably high. It's just not worth it: even if you correctly accuse 9 times out of 10, you are being toxic toward the false-positive case for basically no gain.
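The base-rate effect behind this argument can be made concrete. Below is an illustrative Bayes calculation: even if an accuser's judgment is 90% accurate in both directions (matching the 9/10 figure above), the share of accusations that land on a human writer grows sharply as LLM-written comments become rarer. The 90% figures and the base rates are assumed numbers for illustration, not measurements from the linked papers.

```python
# Sketch of the base-rate problem in accusing commenters of LLM writing.
# Assumptions (not from the source): the accuser flags true LLM text 90%
# of the time (sensitivity) and clears human text 90% of the time
# (specificity). base_rate = fraction of comments actually LLM-written.

def false_accusation_rate(base_rate, sensitivity=0.9, specificity=0.9):
    """P(text was human-written | it got flagged as LLM-written)."""
    p_true_positive = sensitivity * base_rate              # LLM text, flagged
    p_false_positive = (1 - specificity) * (1 - base_rate) # human text, flagged
    return p_false_positive / (p_true_positive + p_false_positive)

for base_rate in (0.5, 0.2, 0.05):
    print(f"base rate {base_rate:.0%}: "
          f"{false_accusation_rate(base_rate):.0%} of accusations hit a human")
```

With these assumed numbers, the false-accusation share is about 10% when half of all comments are LLM-written, but climbs to roughly two-thirds when only 5% are: the rarer real LLM comments are, the more accusations land on humans.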