throwbway37383 | 16 hours ago
You say you don't usually defend LLMs, and then give a defense of LLMs based on a giant misreading of what is absolutely standard human behaviour.

In my local library recently, they had two boards in the lobby as you entered: one with all the drawings created by one class of ~7 year olds based on some book they'd read, and a second with the same idea but the next class up on some other book. Both classes had apparently been asked to do a drawing illustrating something they liked or thought about the book. It was absolutely hilarious and wild, and some were genuinely exquisite. Some had writing, some didn't. Some had crazy, absolutely nonsensical twists and turns in the writing; others had more crazy art stuff going on. There were a few tropes that repeated in some of the lazier ones, but even those weren't all the same thing, the way LLM output consistently is, with few exceptions if any. And then a good number of the kids' drawings were shockingly inventive; you'd be scratching your head going, geez, how did they come up with that? My partner and I stayed for 10 minutes, kept noticing some new detail in another of them, and kept being amazed.

So the reality is the upside-down version of what you're saying. I recognise that this is just an anecdote on the internet, but surely you know it to be true: variants on this experiment are run in classrooms around the world every day. So may I insist that the work produced by children, at least, does not fit your odd view of human beings.
Antibabelic | 14 hours ago | parent
LLMs and image generation models will also give wildly variable output if you hand them an open-ended prompt and turn up the sampling temperature. However, we usually want high coherence and relevance, from both human and synthetic responses.
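For readers unfamiliar with what "temperature" does mechanically: it divides the model's logits before the softmax, so low values concentrate probability on the single most likely token while high values flatten the distribution. A minimal sketch with made-up toy logits (this illustrates the math, not any particular model's API):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample one index from softmax(logits / temperature)."""
    # T < 1 sharpens the distribution (safer, more repetitive picks);
    # T > 1 flattens it (more surprising, more variable picks).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy vocabulary scores: token 0 is the obvious high-probability choice.
logits = [4.0, 1.0, 0.5, 0.1]
low_t = [sample_with_temperature(logits, temperature=0.2) for _ in range(1000)]
high_t = [sample_with_temperature(logits, temperature=5.0) for _ in range(1000)]
# At T=0.2 nearly every draw is token 0; at T=5.0 the draws spread
# across all four tokens, i.e. the "crazy variable output" regime.
```

The same knob explains the parent's point: the sameness people notice in LLM output is largely a product of sampling settings tuned for coherence, not an inherent ceiling on variability.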