forgetfreeman 4 hours ago:
You're reversing causality here. LLMs train on massive bodies of human-generated content. Constructs like the ones mentioned are an entirely unremarkable staple of long-form text produced for audiences who are accustomed to consuming long-form text.
mapt an hour ago:
The formula their responses have settled into in basic explainer mode is pretty distinctive to a lot of us who are otherwise used to reading long-form writing.