kevin42 2 days ago

If they can't distinguish LLM text, then why should they care?

Anti-AI people like to bring up hallucination as if everything AI generates is false.

I can write pages of text with my own content, then use AI to improve my writing and clarity. Then I review and edit. It might have some LLM markers in there, which I sometimes remove because they're distracting. The final, AI-assisted writing is easier to read and better organized, but all the ideas are mine. Hallucinations are not remotely a problem in this case.

Forgeties79 2 days ago | parent [-]

If you can’t distinguish between fake images and real ones, why should you care?

kevin42 2 days ago | parent [-]

That depends on the purpose of the image.

If it's used to create a false narrative (like a deepfake), sure, you should care. But if it's used as an alternative to a stock photo, or as an easy way to make an infographic, then no, I don't think you should care.

joquarky 2 days ago | parent | next [-]

> you should care

Why should I care? The world is full of false narratives.

How can I have the bandwidth to care about everything all of the time?

I swear that more than half of the complaining I find here comes from privileged people bikeshedding over inane topics, people who have never had to worry about serious survival-level issues (how am I going to eat today?) in their lives.

Forgeties79 2 days ago | parent | prev [-]

And when an LLM starts hallucinating, and I emphasize “when,” is that not the same issue as creating a false narrative?