munchler 3 hours ago

I think this is the third HN link I've clicked on in a row that leads to an LLM-generated article. I'm not opposed to AI, but I'm tired of seeing it quietly substituted for human thought and expression.

alex_duf 3 hours ago | parent [-]

I'm seeing this stance a lot: "this is obviously AI generated".

Why? What's LLM generated? How can you tell?

To me what's obvious is that our trust system is already breaking down. Commenters accusing each other of being AIs is also another example of this.

gruez 3 hours ago | parent | next [-]

>Why? What's LLM generated? How can you tell?

Not the guy you're responding to, but:

1. The high number of em dashes is suspect, though it's unclear whether the author inserted them manually or the piece is actually human-written.

2. "One additional failure worth noting: one incident response professional in the HN thread, raised a concern that operates independently of the bot problem" feels out of place for a content marketing piece. HN isn't popular enough to be invoked as a source, and referencing it as "the HN thread" seems even weirder, as if the author prompted "write a piece about how google cloud defense sucks, here are some sources: ..."

3. This passage is also suspect because it follows the chained-negation pattern, though it's only n=1:

>No hardware identifier is transmitted. No attestation is required. No certification layer determines who may participate.

edit:

I also noticed there are 2 other comments that are flagged/dead expressing their reasons.

ribtoks 2 hours ago | parent | next [-]

> actually human generated

Human written, not generated.

> HN isn't popular enough to be invoked as a source

Excuse me, what do you mean there? The author happens to read HN too.

bakugo 2 hours ago | parent | prev [-]

Looks like the moderators are actively deleting comments that call out AI generated articles now. Grim. This comment will probably be deleted too.

greenchair 2 hours ago | parent [-]

mods hastening dead internet theory

Terretta an hour ago | parent | prev | next [-]

Look at the number of : per paragraph. What human puts two : in a single sentence?

"One additional failure worth noting: one incident response professional in the HN thread, raised a concern that operates independently of the bot problem: …"

The ersatz TED Talk meets LinkedInfluencer rhythm of the sentences, the throat-clearing fillers as connective tissue…

Or Wikipedia: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
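The colon-counting observation above can be turned into a toy script. This is purely illustrative: the function name and the threshold of two marks per sentence are my own arbitrary choices, and punctuation density alone proves nothing about authorship.

```python
import re

def suspicious_sentences(text, threshold=2):
    """Flag sentences with many colons or em dashes.

    A naive sketch of the heuristic discussed above; the threshold
    is an arbitrary assumption, not a calibrated detector.
    """
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text)
    flagged = []
    for s in sentences:
        count = s.count(':') + s.count('\u2014')  # ':' and em dash
        if count >= threshold:
            flagged.append((count, s))
    return flagged

sample = ("One additional failure worth noting: one incident response "
          "professional in the HN thread raised a concern that operates "
          "independently of the bot problem: ...")
print(suspicious_sentences(sample))
```

The sample sentence quoted in this thread trips the check because it contains two colons; plenty of human prose would trip it too, which is rather the point of the trust-breakdown argument upthread.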

munchler 3 hours ago | parent | prev | next [-]

The choppy language is the biggest trigger for me. Examples:

* "With Fraud Defense, there was no process to respond to. The product launched. The requirements page went live."

* "That is not a technical limitation waiting to be engineered around. It is the mechanism."

* "The defeat is mechanical. Bot operators point a camera at a screen, a trivial automation with off-the-shelf hardware."

I could be wrong, of course. Maybe humans are starting to write like LLMs, or maybe it's just confirmation bias on my part.

bakugo 3 hours ago | parent | prev [-]

The entire article is just one long stream of short, punchy, declarative sentences. The latest Claude models are notorious for writing like this.

There's also a few cookie-cutter patterns that should immediately jump out at you if you're at all familiar with AI writing, such as:

> No hardware identifier is transmitted. No attestation is required. No certification layer determines who may participate. User privacy is structurally preserved, not promised.

> Google Cloud Fraud Defense is not a reCAPTCHA update. The QR code is the visible mechanism, but device attestation is the real product.