ffsm8 2 hours ago

It's AI written though, the tells are in pretty much every paragraph.

ratsimihah 2 hours ago | parent | next [-]

I don’t think it’s that big a red flag anymore. Most people use AI to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah, it’s AI written.”

Thanemate 36 minutes ago | parent | next [-]

>Most people use ai to rewrite or clean up content

I think your sentence should have been "people who use AI mostly do so to rewrite or clean up content", but even then I'd question the statistical truth behind that claim.

Personally, seeing something written by AI tells me the person who wrote it did so for looks rather than substance. Claiming to be a great author requires both penmanship and communication skills, and delegating either of them to a large language model inherently makes you less than that.

However, when the point is just the contents of the paragraph(s) and nothing more, then I don't care who or what wrote it. An example is research results: I certainly won't care about the prose or the effort that went into writing the thesis, only the results (is this about curing cancer now and forever? If yes, no one cares if it's written with AI).

That being said, there's still no way I get anywhere close to understanding the author behind the thoughts and opinions. I believe the way someone writes hints at the way they think and act. In that sense, using LLMs to rewrite something to sound more professional than how you would actually speak in the relevant context makes it hard for me to judge someone's character, professionalism, and mannerisms. It almost feels like they're trying to mask part of themselves. Perhaps they lack confidence in their ability to sound professional and convincing?

pmg101 2 hours ago | parent | prev | next [-]

I don't judge content for being AI written, I judge it for the content itself (just like with code).

However, I do find the standard out-of-the-box style very grating. Call it faux-chummy LinkedIn corporate workslop style.

Why don't people give the LLM a steer on style? Either based on your personal style or at least on a writer whose style you admire. That should be easy enough.

xoac an hour ago | parent [-]

Because they think this is good writing. You can’t correct what you don’t have taste for. Most software engineers think that reading books means reading NYT non-fiction bestsellers.

shevy-java 2 hours ago | parent | prev | next [-]

Well, real humans may read it though. Personally, I much prefer real humans writing real articles to all this AI-generated spam-slop. On YouTube this is especially annoying: channels mix real footage with fake. I see it when I watch animal videos - some animal behaviour is taken from older videos, then AI fakery is spliced in. My policy is to never again watch anything from people who lie to their audience that way, so I've had to start blocking such channels. I'd apply the same rationale to blog authors (though I'm not 100% certain this one is actually AI generated; I mention it only as a safeguard).

pi-rat an hour ago | parent | prev | next [-]

The main issue with evaluating content for what it is is that the process has become extremely asymmetric.

Slop looks reasonable on the surface, yet it takes orders of magnitude more effort to evaluate than to produce. It's produced once, but the evaluation has to be repeated by every single reader.

Disregarding content that smells like AI becomes an extremely tempting early filtering mechanism to separate signal from noise - the reader’s time is valuable.

elaus 2 hours ago | parent | prev | next [-]

I think as humans it's very hard for us to abstract content from its form. So when the form is always the same boring, generic AI slop, it really doesn't help the content.

rmnclmnt 2 hours ago | parent [-]

And maybe writing an article or keynote slides is one of the few places where we can still exercise some human creativity, especially when the core skill (programming) is already almost completely in the hands of LLMs.

ffsm8 2 hours ago | parent | prev [-]

> I don’t think it’s that big a red flag anymore.

It is to me, because it indicates the author didn't care about the topic. The only thing they cared about was writing an "insightful" article about using LLMs. Hence this whole thing is basically LinkedIn resume-improvement slop.

Not worth interacting with, imo

Also, it's not insightful whatsoever. It's basically a retelling of other articles from around the time Claude Code was released to the public (March-August 2025).

handfuloflight an hour ago | parent | prev [-]

So is GP.

This is clearly a standard AI exposition:

LLMs are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes.