-__---____-ZXyw a day ago |
It might be the ultimate irony if we were training them. But we aren't, at least not in the sense that we train dogs. Dogs learn, and exhibit some form of intelligence. LLMs do not. "Training" is one of many unfortunate anthropomorphic buzzwords that conveniently win hearts and minds (of investors) over to the notion that we're tickling the gods, rather than the more mundane fact that we're training tools for synthesising and summarising very, very large data sets.
gmueckl a day ago | parent |
I don't know how the verb "to train" became the technical shorthand for running gradient descent on a large neural network. But that's orthogonal to the fact that these stories are very likely part of the training dataset, and thus something the network is optimized to approximate. So however technically you want to word it, the fundamental irony remains: cautionary tales (and the bad behavior in them) are being used as optimization targets.
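For anyone who hasn't seen the machinery, here's a toy sketch of what that shorthand covers: gradient descent nudging parameters until a model's outputs approximate a dataset. Plain NumPy with made-up data; an LLM run is the same idea at enormous scale:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up "training data": inputs X and targets y the model should approximate.
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

    w = np.zeros(3)   # model parameters
    lr = 0.1          # learning rate

    for step in range(200):
        pred = X @ w                            # model output
        grad = 2 * X.T @ (pred - y) / len(y)    # gradient of mean squared error
        w -= lr * grad                          # the "training": descend the gradient

    print(w)  # ends up encoding whatever pattern the dataset contains

The point being: whatever is in the dataset, cautionary tales included, is literally the target the optimizer pulls the parameters toward.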