rmunn 2 hours ago
I've seen so many LLM-generated articles by this point that obviously had no human editing done beforehand — just prompt and slap it onto the Web — that it makes me wonder every time. If I read this article, will I actually learn only truth? Or are there some key parts of this article that are actually false because the LLM hallucinated them, and the human involved didn't bother to double-check the article before publishing it?

If someone was just using the LLM for style, that's fine. But if they were using it for content, I just can't trust that it's accurate. And the time cost for me to read the article just isn't worth it if there's a chance it's wrong in important ways, so when I see obvious signs of LLM use, I just skip and move on.

Now, if someone acknowledged their LLM use up front and said "only used for style, facts have been verified by a human" or whatever, then I'd have enough confidence in the article to spend the time to read it. But unacknowledged LLM use? Too great a risk of uncorrected hallucinations, in my experience, so I'll skip it.