istjohn | 5 days ago
We build mental models of things we have not personally experienced all the time. Such mental models lack the detail and vividness of those formed through first-hand experience, but they are nonetheless useful. Indeed, a student of physics who has never touched a baseball may have a far more accurate and precise mental model of a curve ball than a major league pitcher.
HarHarVeryFunny | 4 days ago | parent
Sure, but the nature of the model can only reflect the inputs (incl. corrections) that it was built around. A theoretical model of the aerodynamics of a curve ball isn't going to make the physics student an expert pitcher; they may not be able to throw a curve ball at all. Given the widely different natures of a theoretical "book smart" model vs a hands-on model informed by the dynamics of the real world and how it responds to your own actions, it doesn't seem useful to call these the same thing.

For sure the LLM has, in effect, some sort of distributed statistical model of its training material, but this is not the same as the knowledge held by someone/something with hands-on experience of the world. You wouldn't train an autonomous car to drive by giving it an instruction manual and stories of people's near-miss experiences - you'd train it in a simulator (or better yet, the real world), where it can learn an actual world model - a model of the world you want it to know about and be effective in, not a WORD model of how drivers are likely to describe their encounters with black ice and deer on the road.