mattnewton 15 hours ago:
I think it was a popular style before gen AI, and the LLM training process picked up on it.
andy99 15 hours ago (parent):
That’s not how LLMs work. It comes from the reinforcement learning or SFT dataset: data labelers wrote or generated tons of examples using this and other patterns (all the emoji READMEs, for example), and the models emulate them. The early models had very formulaic essay-style outputs that always ended with “in conclusion”, lots of the same kind of bullet lists, and a love of adjectives and delving, all of which were intentionally trained in. It’s more subtle now, but it’s still there.
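To make the point concrete, here is a toy sketch of what a single SFT record with those stylistic tics might look like. Everything here is invented for illustration (the field names, the example text, and the tic list are not from any real dataset); the point is just that fine-tuning on thousands of labeler-written responses in this register bakes the register into the model.

```python
# Hypothetical SFT (supervised fine-tuning) record; field names are
# illustrative, not from any real labeling pipeline.
sft_example = {
    "prompt": "Explain why the sky is blue.",
    "response": (
        "Great question! Let's delve into it:\n"
        "- **Rayleigh scattering**: shorter wavelengths scatter more.\n"
        "- **Blue light** dominates what reaches our eyes.\n"
        "In conclusion, the sky looks blue because of scattering."
    ),
}

def count_stylistic_tics(text: str) -> int:
    """Count occurrences of a few (made-up) signature patterns."""
    tics = ["delve", "In conclusion", "- **"]
    return sum(text.count(t) for t in tics)

print(count_stylistic_tics(sft_example["response"]))  # prints 4
```

A model trained to imitate many such responses reproduces the tics by default, which is why they persist even after the training recipe gets more subtle.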