FrustratedMonky 3 hours ago

I've noticed a trend of calling every single article "This is AI or LLM, I can't stand it".

And really, you can't tell. Nobody can tell. Humans also write badly and blandly. It's just a trope at this point.

No, your comment is an LLM.

Night_Thastus 3 hours ago | parent | next [-]

LLMs often have a distinct writing style. It's not guaranteed, you can get false positives and false negatives, but if you start paying attention it becomes obvious in many cases.

chambertime 3 hours ago | parent | next [-]

My poor Reddit has been taken over by bots :(

2ndorderthought 2 hours ago | parent [-]

Reddit is extra cooked soon.

bombcar 2 hours ago | parent [-]

I’m going to assume everyone using “cooked” comes from uThermal and you won’t convince me otherwise.

CamperBob2 2 hours ago | parent | prev | next [-]

That obviously won't be true for much longer, assuming it's still true now, which I doubt. If you're an LLM content farmer, how hard could it possibly be to LoRA your way out of generating cliches like emdashes, 'You're absolutely right!' and 'It's not A, but B' rhetoric?
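The claim that the surface tells are trivially removable is easy to illustrate. As a purely hypothetical sketch (not anything a real content farm is known to run, and simpler than LoRA fine-tuning), even a post-processing pass can strip the most-cited giveaways:

```python
import re

# Hypothetical post-processing pass that removes the surface "tells"
# people cite most often: em-dashes, the "You're absolutely right!"
# opener, and the "not just X, but Y" construction.
def scrub_tells(text: str) -> str:
    # Replace em-dashes (with or without surrounding spaces) with a comma.
    text = re.sub(r"\s*\u2014\s*", ", ", text)
    # Drop the sycophantic opener entirely.
    text = re.sub(r"You're absolutely right!\s*", "", text)
    # Flatten "not just X, but Y" into "X and Y".
    text = re.sub(r"not just ([^,]+), but (also )?", r"\1 and ", text)
    return text

print(scrub_tells(
    "You're absolutely right! It's not just fast, but elegant \u2014 and cheap."
))
# -> It's fast and elegant, and cheap.
```

A real operation would presumably do this at the model level rather than with regexes, but the point stands: anything detectable by pattern-matching is removable by pattern-matching.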

We should probably go ahead and get over it.

FrustratedMonky 41 minutes ago | parent | prev | next [-]

I guess my point was lost.

It is obvious, when it is obvious. When it is not, you don't know it.

There are a ton more false positives now. Everyone is calling everything 'LLM slop'.

Because there is a lot of slop. Now every bad human writer is being called an AI just for being human.

And that is obscuring the fact that a ton of stuff is LLM-written and nobody can tell.

People that say they can tell the difference are fooling themselves.

gchamonlive 18 minutes ago | parent [-]

Don't beat yourself up over it. It's the new sport for HN upvote farmers to default to calling out any TLDR post that has "delve" or some other cliché in it as LLM slop. I also think it's a waste of time. What's important is the content. Is the content of the article valuable? No? Just close it and move on. But we know the incentive of a few upvotes is just too good to pass up...

FrustratedMonky 3 hours ago | parent | prev [-]

Yes, if you are using a generic LLM.

But you can tell it to use different styles: to be formal or informal, to insert colloquialisms or to remove them.

People are depending on their own 'gut sense' a lot, and not realizing how often they are wrong.

If you think all it takes is paying attention, then you are missing it. It's both more widely used than assumed, and also now obscuring what is non-AI.

zahlman 2 hours ago | parent [-]

> But you can tell it to use different styles. To be formal or in-formal, to insert colloquialisms or to remove.

And when you get it right, the result doesn't get called AI generated.

> People are depending on their own 'gut-sense' a lot, and not realizing they are really not correct.

TFA is very obvious about it.

A human who writes like this should be ashamed to do so, and should endeavour to understand why the writing comes across as "generic LLM"-like and fix it.

We have reached a point where people can end up training their writing on generic LLM output. This is a bad thing, because it's bad output.

Even beyond any clues from writing style, the general presentation is bad. It presents far too many facts and figures without giving anyone a good reason to care about most of them. And then it ends with a section on a separate topic (how to choose a lab, rather than how they're distributed across the world).

Most importantly, though, the submission is presented with a different title that implies a different purpose to the article that is not elaborated in the article. I would have expected personal insight a) on why people should care about the FCC's action (there is no mention of that action at all); b) on what the process was like of collecting this data. And I would have expected, you know, mapping of the lab locations rather than bar charts giving geographic breakdowns.

chownie 3 hours ago | parent | prev | next [-]

This article goes ham on the rule of threes, uses the "not just x, but y" cliché, em-dashes with spaces on either side, and bold heading-sentence paragraphs. It visibly has the hallmarks of AI-driven writing.

If you personally can't tell then just say that rather than casting aspersions on everyone else by claiming they can't.

godelski 3 hours ago | parent | prev | next [-]

Fun fact, the author admits to using a LLM.

https://news.ycombinator.com/item?id=47963465

FrustratedMonky 33 minutes ago | parent [-]

Not the first time this has happened.

Half the articles now are LLM-written.

If he didn't admit it, we'd be arguing over 'style', which itself can be configured.

Prompt> LLM don't use em-dash.

LLM> OK.

alnwlsn 2 hours ago | parent | prev | next [-]

No human* would waste the time to write a piece that is both highly polished and so long that any useful information is spread so thinly it is essentially empty. This is how people "can tell" if it is written by AI.

Not a dig at this author by the way or saying it applies to this post, just in general.

*or if they did anyway, the result is the same: bad writing.

JumpCrisscross 2 hours ago | parent | next [-]

> a piece that is both long and highly polished while being devoid of useful information

Idk, I learned a little bit about our regulatory system, that a lot of these labs are in China and that those are now banned (and that the ones in India may be next).

The style is admittedly annoying. But I'm glad the author put in the work to highlight something they, and now I through them, found interesting.

CamperBob2 2 hours ago | parent | prev [-]

> No human would waste the time to write a piece that is both highly polished while being so long that any useful information is spread so thinly it is essentially empty.

LOL, some of us spent 12 years in public schools refining this very art to perfection.

ramon156 2 hours ago | parent | prev | next [-]

Haha, good one Altman!

RobRivera 3 hours ago | parent | prev [-]

I wake up, there is another psyop, I go to sleep