lsecondario 10 hours ago
I like this analogy a lot for non-technical...erm...audiences. I do hope that anyone using it will pair it with loud disclaimers about not anthropomorphizing LLMs; they do not "lie" in any real sense, and framing things in those terms can give the impression that their output should be interpreted in terms of "trust". The emergent usefulness of LLMs is (currently, at least) fundamentally opaque to human understanding, and we shouldn't lead people to believe otherwise.