▲ wongarsu 7 hours ago
The issue is that LLMs adopt a very particular style: a mix of high polish (em-dashes, lists of three, etc.) reminiscent of marketing copy, plus some quirks picked up from the humans curating the training data somewhere in Africa. If AI wrote like everyone else, we wouldn't be talking about this. But instead it writes the way a subset of people write, many of them only some of the time and as a conscious effort. An effort that now makes what they write look like lower quality.
▲ d4mi3n 6 hours ago | parent
I find this interesting because, grammatically and structurally, LLMs often generate _higher quality_ text than most humans do. What tends to be lower quality is the meaning of those texts. Say what you want about the marketing-isms of your typical LLM; they have been trained to produce, and often succeed at producing, legible, easy-to-scan blocks of text. I suspect that if more LLM spam were curated or touched up, most people would be unable to distinguish it from human discourse. Other commenters on this article are already discussing patterns they use to detect or flag bots using LLMs.