omnicognate 4 days ago
Understanding how they work in the sense that permits people to invent and implement them, that provides the exact steps to compute every weight and output, is not "meaningful"? There is a lot left to learn about the behaviour of LLMs, higher-level conceptual models to be formed to help us predict specific outcomes and design improved systems, but this meme that "nobody knows how LLMs work" is out of control.
recursive 4 days ago | parent
None of that is inherent, and vanishingly few of Anthropic's users invented LLMs. | |||||||||||||||||