Ask HN: How do we handle the rise of low quality "This is LLM" comments?
5 points by shantnutiwari 5 hours ago | 16 comments

Every post that reaches the top of HN will have at least a few comments saying "This is LLM!"

It has become a proxy for "I don't like this article, so it must be an LLM"

To me, it feels like lazy karma farming, as these comments often do get a few upvotes.

And of course, if you accuse 100 posts of being LLM, you are guaranteed to be right at least once, and then like astrologers you can claim success.

Is there anything we can do to discourage this type of lazy and low effort posting?

etblg 7 minutes ago | parent | next [-]

Maybe people should stop posting shitty LLM-written articles that don't generate any good discussion beyond "I think this was written by an LLM" and we won't have this "problem".

spats1990 5 hours ago | parent | prev | next [-]

When you encounter these comments/sentiment, pretend that LLM = Low-effort Long Mumbling. In other words, poor writing.

Detection of "LLM" is a red herring. Quality is what matters. Always has been. Assess comment quality holistically, and you'll be fine.

krapp 5 hours ago | parent [-]

If "quality" is all that matters and maximizing quality is the goal, and if LLMs can generate higher quality comments more consistently than humans, we should close all user accounts. Don't even have this be a forum anymore. Have LLMs crawl the web, post articles then generate threads discussing them from various simulated points of view. No direct human participation, no Eternal September. Then readers can have their own agents summarize the threads for them.

We can consider this the carcinization of online discourse - everything evolves towards the optimum of LLM summarization.

spats1990 5 hours ago | parent [-]

> if LLMs can generate higher quality comments more consistently than humans

Do you believe this?

krapp 4 hours ago | parent [-]

No, because my definition of "quality" for comments implicitly includes human intent, which LLMs lack.

But I suspect a lot of people on HN only view these threads as data and that for them "quality" only exists within the semantics and structure of the text itself, and the human element doesn't matter to them.

mts_building 4 hours ago | parent | prev | next [-]

My honest opinion is just to accept it, move on, and continue writing, building, and creating stuff that will resonate with at least a bunch of people. Unfortunately, what is karma farming here or on Reddit becomes hateful comments on YouTube, and so on, depending on the platform.

carlosjobim an hour ago | parent | prev | next [-]

The answer has been the same since the days of Moses:

Drown it out with high quality submissions and high quality comments.

kirykl 5 hours ago | parent | prev | next [-]

add a less severe "Flag as AI" button

codingdave 4 hours ago | parent | next [-]

That would be the little downwards-facing arrow to the left.

Cider9986 4 hours ago | parent | prev [-]

Why can't we just use Flag?

brudgers 3 hours ago | parent | prev | next [-]

Ignore or downvote or flag [1] depending on your confidence in your judgement, your perception of its severity of impact on the HN community, your mood, etc.

Just like any other behavior you don’t like.

[1] logically upvoting is also an option.

tabakd 4 hours ago | parent | prev | next [-]

This is LLM

brazukadev 42 minutes ago | parent [-]

openclaw. Pure AGI.

6510 4 hours ago | parent | prev [-]

HN should add some kind of LLM detection. Preferably something that rates how unhinged a comment is.

Smoke me a kipper, I'll be back for breakfast.

dormento 35 minutes ago | parent | next [-]

> rates how unhinged a comment is

No can do, too many false positives considering the usual demographics.

krapp 4 hours ago | parent | prev [-]

The thing that rates how unhinged a comment is is the downvote button, or flag button in extreme cases.

LLM detection is basically witchcraft, though, for all but the most obvious cases.