sethev | 8 hours ago
LLMs were trained on stuff that people wrote. I get that there are "tells", but I don't really think people are as good at identifying AI-generated text as they think they are...
afro88 | 5 hours ago
I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of its output to figure out whether I could stand behind it. Now I see the tells everywhere. "It's not this. It's that." is particularly common, and I can't unsee it. (FWIW, I rewrote most of the writing it generated, but it did help me figure out my structure and narrative.)

The problem, I think, with AI-generated posts is that you feel you can't trust the content once it's AI. It could be partly hallucinated or misrepresented.
| ||||||||
antonvs | 6 hours ago
Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:

> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.
adi_kurian | 3 hours ago
Contractions | ||||||||
computably | 4 hours ago
You don't have to be good at identifying AI-generated text to detect low-effort slop.