eschatology 3 hours ago

These AI authorship accusations are now everywhere and it's getting really annoying.

It just distracts from the discussion and adds nothing.

exmadscientist 2 hours ago | parent | next [-]

I disagree. Articles written by AI are inherently less trustworthy (they're notorious for fabrications and hallucinations) and often have a very low content density. "Write me a 10 paragraph article about high hardware prices for my blog" is the sort of thing that expands to a lot of fluff with not a lot of content. I don't really want to read that article.

Even if AI might (might) be justifiable as an editor, it's still such a negative signal for "is this worth reading?" that in my opinion it is worthwhile to point out and discuss.

bryanlarsen 2 hours ago | parent | next [-]

This case is interesting, because it seems obvious that the AI accusation is just plain wrong. The article is riddled with the kind of grammatical and spelling mistakes that humans regularly make but that a modern AI would never make.

83 2 hours ago | parent | next [-]

That could easily be part of their prompt. I just did a quick test telling Copilot to "add a few spelling and grammar mistakes to look like a human wrote it" and it does a reasonably convincing job.

bryanlarsen an hour ago | parent [-]

The errors are more than superficial; few authors would be willing to have AI screw up their work enough to invert the meaning of some of their sentences. https://news.ycombinator.com/item?id=47169256

exmadscientist 2 hours ago | parent | prev [-]

It's also very easy to paste a paragraph into a chatbot and ask it to revise it. Or ask it to write an introduction.

I don't really have a problem with that use of AI.

But one of the costs is reputational: potential readers are now going to assume AI wrote the whole article, fairly or unfairly. That's a consideration writers have to weigh before choosing to do this.

AnimalMuppet 2 hours ago | parent | prev [-]

I agree that AI-written text often has a low content density. I wonder if it's a matter of information theory.

Information theory defines the information of a symbol in terms of how probable it is: the less expected a symbol, the more information it carries. (Usually "symbol" is defined as one character or byte, but it could be a word or word part.)

Well, if you think about LLMs that way, they give you the most-probable next word (or word part). That means that they give you less information than normal writing. I suspect that's why it reads as bland, low-content - because it really is low content, in the information theory sense.

Now, an LLM doesn't always give you the most probable next symbol. There is some randomness, and you can increase that randomness by turning up the temperature. But if you do, I suspect the output becomes incoherent more quickly. (Random gibberish may have high information from an information theory standpoint, but humans don't want to read that either.)
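The temperature argument above can be sketched numerically. This is a minimal illustration, not how any particular model is implemented: the logits below are made up, and "entropy" here is the average surprisal (-log2 p) of the next-token distribution. Low temperature concentrates probability on the favorite token (low information per token); high temperature spreads it out (higher entropy, but eventually gibberish).

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities at a given sampling temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_bits(probs):
    """Shannon entropy in bits: expected surprisal -log2(p) over the distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token logits: one strong favorite, a few alternatives.
logits = [5.0, 2.0, 1.0, 0.5, 0.1]

for t in (0.2, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    print(f"T={t}: top-token p={max(probs):.3f}, entropy={entropy_bits(probs):.2f} bits")
```

Running this shows entropy rising with temperature: near T=0.2 the top token is almost certain (entropy near zero, i.e. "bland" text in the information-theoretic sense), while at T=2.0 the distribution flattens and each sampled token carries more surprise.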

inahga 2 hours ago | parent | prev | next [-]

On the contrary, GP's warning saved my time and attention. For that I thank them.

nottorp 2 hours ago | parent | prev | next [-]

Maybe it's because, no matter how it was done, this is a boring piece about the tiny, tiny minority of numbers-obsessed "gamers" who upgrade their hardware yearly.

It's not even representative of gamers as a whole.

And starting from that, it's easy to also accuse it of being LLM generated, even if it isn't.

I have no opinion because I couldn't go past the first paragraph. It's not talking about any subgroup I can identify with.

Also, after skimming, it didn't say anything new or insightful. No matter if "content creator" or "AI".

RGamma 2 hours ago | parent | prev | next [-]

LLMs make it too easy to blow up a crudely defined sentiment into a cloud of semi-plausible-sounding verbiage.

If we allow this to become normalized, then I'm done with this part of the internet. (And I agree, one has to use this criticism with some care.)
