botusaurus 2 hours ago

you know why LLMs repeat those patterns so much? because that's how real humans speak

Starlevel004 an hour ago | parent | next [-]

Real humans don't speak in LinkedIn Standard English

swiftcoder 42 minutes ago | parent | next [-]

"LinkedIn Standard English" is just the overly-enthusiastic marketing speak that all the wannabe CEOs/VCs used to spout. LLMs had to learn it somewhere

chuckadams an hour ago | parent | prev | next [-]

LinkedIn and its robotic tone existed long before generative AI.

Know what's more annoying than AI posts? Seeing accusations of AI slop for every. last. god. damned. thing.

IshKebab an hour ago | parent [-]

Yes, that's the point. LLMs pretty much speak LinkedInglish. That existed before LLMs, but only on LinkedIn.

So if you see LinkedInglish on LinkedIn, it may or may not be an LLM. Outside of LinkedIn... probably an LLM.

It is curious why LLMs love talking in LinkedInglish so much. I have no idea what the answer to that is, but they do.

cookiengineer 33 minutes ago | parent | prev [-]

> LinkedIn Standard English

We need a dictionary like this :D

ndtimes an hour ago | parent | prev [-]

marketing people did have a tendency to be more bombastic and self-aggrandizing than average, spouting pompous shit devoid of meaning, but normal people most definitely do not speak like this:

> The build.bat above isn’t just a helper script; it’s a declaration of independence from the Visual Studio Installer

this is 100% GPT slop. You can even tell it's GPT specifically from the fact that it has a ; instead of a —, because the recent models were trained to use the em dash less and to put a semicolon in the same places where they used to throw em dashes.

GPT-4o would have done:

>The build.bat above isn’t just a helper script—it’s a declaration of independence from the Visual Studio Installer
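To make that tell concrete, here's a toy Python sketch (nothing rigorous, just counting the two punctuation marks; the sample sentences are made up to echo the quote above, and this is not a real detector):

    # Toy illustration of the punctuation "tell": count em dashes (U+2014)
    # vs. semicolons in a piece of text. Purely a sketch of the idea.
    def punctuation_profile(text: str) -> dict:
        return {
            "em_dashes": text.count("\u2014"),
            "semicolons": text.count(";"),
        }

    old_style = "It isn't just a helper script\u2014it's a declaration of independence."
    new_style = "It isn't just a helper script; it's a declaration of independence."

    print(punctuation_profile(old_style))  # {'em_dashes': 1, 'semicolons': 0}
    print(punctuation_profile(new_style))  # {'em_dashes': 0, 'semicolons': 1}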

>you know why LLMs repeat those patterns so much

Unlike you, I do know why LLMs can fall into repeating certain patterns, and it most definitely has nothing to do with "how humans speak". The better the model (as a tool), the more it has been trained on artificially generated data that teaches it the "proper" way to do tasks. Instruction-tuned models have nothing to do with the original release of GPT-3; they have been their own thing ever since the release of ChatGPT itself.

You can control what sort of patterns it falls into, and that's why, if you had any ability to notice things as a human being, you would have seen that newer GPT-generated content has less em dash spam, even when the human generating the content doesn't bother touching up the text.
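To make the "you can control what patterns it falls into" point concrete, here's a purely hypothetical sketch of a synthetic style-steering training example (the chat-message shape and the wording are invented for illustration, not any vendor's actual tuning data):

    # Hypothetical synthetic instruction-tuning example (invented shape and
    # wording). The idea: pair a prompt with a "preferred" completion whose
    # punctuation style you want the model to imitate, e.g. semicolons
    # instead of em dashes (U+2014).
    style_steering_example = {
        "messages": [
            {"role": "system",
             "content": "Write clearly. Prefer semicolons or periods over em dashes."},
            {"role": "user",
             "content": "Describe what build.bat does in one sentence."},
            {"role": "assistant",
             "content": "build.bat compiles the project without the IDE; "
                        "it is just a thin wrapper around the compiler."},
        ]
    }

    # Enough examples like this in the fine-tuning mix, and the pattern
    # shows up in generated text even when nobody asks for it.
    print(style_steering_example["messages"][-1]["content"].count(";"))  # 1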