zahlman | 2 days ago
If I'm being entirely honest, in the general case I don't. But I don't particularly care, either. After a couple of tries I decided it's better not to point at specific examples of suspected LLM text all the time (except e.g. to report it on Stack Overflow, where it's against the rules and where moderators will use actual detection software etc. to try to verify). But I still notice that style of writing instinctively, and it still automatically flips a switch in my brain to approach the content differently. (Of course, even when I'm confident that something was written by a human, I still e.g. try to verify terminal commands against the man pages before following instructions I don't understand.)

Of course, AI writes the way it does for a reason. More worryingly, it increasingly seems like (verifiably) human writers are mimicking the style - like they see so much AI-generated text out there that sounds authoritative, and start trying to use the same rhetorical techniques in order to gain that same air of authority.
buttercraft | 2 days ago | parent
> still notice that style of writing instinctively, and it still automatically flips a switch in my brain

See, this is what worries me. We have untold years of instinct, and none of it is tuned for what is happening now.