throwaway150 11 hours ago

Does this read like AI slop to anyone else?

That whole "Profiles don't/aren't just $THIS; they're also $THAT" construction is classic LLM output. Then you've got the weird, confusing inconsistencies, like calling profiles a new feature when they aren't, plus the rule of 3 everywhere ("avatars, colours, naming"; "set boundaries, protect your information and make the internet a little calmer"). It all feels machine-written. Even the comparison of tidying your tabs to setting boundaries seems meaningless. It's just the sort of empty parallel AI loves to make.

It's a short article, but I really had to power through it because with every sentence I kept thinking, this was not written by a human. If it is AI-generated slop, that'd explain why some parts of it don't make any sense.

small_scombrus 7 hours ago | parent | next [-]

> $THING isn't just $THIS, it's also $THAT!

is pure marketing speak, which is also what a lot of LLM-generated text sounds like to me.

hmstx an hour ago | parent [-]

Someone in a past thread here mentioned that they enjoyed using LLMs to generate all their PR marketing nonsense blurbs, because the output looked just as good as the real thing. That might have been 2-3 years ago, but I still joke about it with coworkers when the conversation shifts to "AI".

gdulli 11 hours ago | parent | prev [-]

An idea we'll have to start getting used to is that people who read enough of the slop might begin to emulate it without necessarily meaning to. The homogeneity will be contagious.

krick 8 hours ago | parent [-]

I still don't quite understand where ChatGPT and its pals learned this. Sure, PR copywriters are notoriously bad at writing, but I don't think I ran into this stuff that often in texts before; if I had, I wouldn't be noticing it now as that ChatGPT style. So why does it write like that? Do Anthropic's models write the same way (I've never used them)? Is it some OpenAI RL artifact, or is it something deeper, something about the language itself?

I can't even always quite articulate what irks me so much about its output, apart from that "it's not X, it's Y" pattern. For non-English it may be easier: it really just can't use idioms properly, which is super irritating. But I wouldn't say it doesn't know English. Yet it somehow always manages to write in an uncannily bad style.

_flux 2 hours ago | parent | next [-]

I think the idiom is fine by itself, and that could be why LLMs prefer it: hypothetical test groups liked it.

The problem is that LLMs use it way too often.

And I suppose it would be difficult to actually fix: even if a model used the idiom only once per session, it might still choose to use it in every session, since it can't see how often the idiom has appeared in other ones.

machomaster 6 hours ago | parent | prev [-]

Looks like you are much worse at understanding writing than you think.

Contrasting, the rule of three, etc. are basic writing techniques that are common among good writers precisely because they work. That's why AI learned to use them - they are very effective in communication.