gdulli 9 hours ago

An idea we'll have to start getting used to is that people who read enough of the slop might begin to emulate it without necessarily meaning to. The homogeneity will be contagious.

krick 7 hours ago | parent [-]

I still don't quite understand where ChatGPT and its pals learned this. Sure, all these PR copywriters are notoriously bad at writing, but still, I don't think I often came across this stuff in texts before. I mean, if I had, I wouldn't be noticing it now as that ChatGPT style. So why does it write like that? Do Anthropic's models write like this as well (I've never used them)? Is it some OpenAI RL artifact, or is it something deeper, something about the language itself?

I cannot even always quite put my finger on what irks me about its output so much, except for that "it's not X, it's Y" pattern. For non-English it may be easier, because it really just cannot use idioms properly, which is super irritating. I wouldn't say it doesn't know English. Yet it somehow always manages to write in an uncannily bad style.

_flux an hour ago | parent | next [-]

I think the idiom by itself is good, and that could be the reason why LLMs prefer it: a hypothetical test group liked it.

The problem is that the LLM uses it way too often.

And I suppose it would be a bit difficult to change this so it's actually good: even if the model used the idiom only once per session, it might still choose to use it in every session, since it cannot see how often the idiom has already appeared in other sessions.

machomaster 5 hours ago | parent | prev [-]

Looks like you are much worse at understanding writing than you think.

Contrasting, the rule of three, etc. are basic writing techniques that are common among good writers because they are effective. That is why AI learned to use them: they work very well in communication.