Aurornis an hour ago

For a while there were a lot of posts from people experimenting with ChatGPT to write anger-bait posts on Reddit, which they would later edit to admit were fake and written by ChatGPT.

I assume they thought they'd teach people a lesson by making them feel foolish for responding to AI stories, most of which were too outlandish to be believable.

However, it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake. In advice subreddits, commenters continued to give advice on the situation. Some would say they had seen the notice that the post was fake, but kept arguing about it anyway.

This makes a feature of Reddit very clear: the truthfulness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, to be angry about.

In retrospect it's obvious, given that misinfo posts were the easiest way to karma-farm for years, even before AI.

chromacity an hour ago | parent | next [-]

We do precisely the same thing here. Here's a relatively recent post that, to me, is obviously LLM-written. It just rattles off management platitudes:

https://news.ycombinator.com/item?id=47913650

It had 639 comments and 866 upvotes. And that's not a one-off.

walrus01 2 minutes ago | parent | next [-]

Sufficiently advanced "AI" is indistinguishable from a LinkedIn true-believer, Kool-Aid-drinking middle-management type.

coldtea 32 minutes ago | parent | prev [-]

I wish there were an internet-wide "don't show again" button for slop pages like that.

coldtea 34 minutes ago | parent | prev [-]

>However it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake

That's 90% of current Facebook pages and groups.