beaker52 8 hours ago
I still read LLMs’ output quite critically, and I cringe whenever I do. LLMs are just plain wrong a lot of the time. They’re just not very intelligent; they’re great at pretending to be intelligent. They imitate intelligence. That is all they do. And I can see it every single time I interact with them. And it terrifies me that others aren’t quite as objective.
sidrag22 7 hours ago
I usually feed my articles to it and ask for insight into what’s working, and I usually wait to initiate any AI feedback until my rough draft is totally done. Working in this manner, it is painfully clear that it doesn’t even really follow the flow of the article. It misses so many critical details and just sort of fills in its own blanks, and gets them wrong. When you tell it that it’s missing a critical detail, it treats you like some genius, every single time. It is hard for me to imagine growing up with it and using it to write my own words for me. The only time I copy-paste AI-generated words to a fellow human is for totally generic customer-service-style replies, for questions I don’t consider worthy of any real time. AI has kind of taken away my flow state for coding, rare as it was. I still get it when writing stuff I am passionate about, and I can’t imagine I’ll ever want to outsource that.
zahlman 4 hours ago
> And it terrifies me that others aren’t quite as objective.

I have been reminded constantly throughout this that a very large fraction of people are easily impressed by such prose. Skill at detecting AI output (in any given endeavour), I think, correlates with skill at evaluating the same kind of work generally. Put more bluntly: slop is slop, and it has been with us for far longer than AI.