svara 6 days ago
If I take what you just wrote together with the comment I first reacted to, I believe I understand you as saying the following: of a large or infinite number of models which in limited testing have equal properties, only a small subset will contain actual understanding, a property that is independent of the model's input-output behavior?

If that's indeed what you mean, I don't think I can agree. In your 2+a+b-2 example, that is an unnecessarily convoluted but entirely correct model of addition. Epicycles are a correct model of celestial mechanics, in the limited sense of being useful for specific purposes. The reason we call that model wrong is that it has been made redundant by a different model that is strictly superior - in the predictions it makes, but also in the efficiency of its teaching.

Another way to look at it is that understanding is not a property of a model, but a human emotion that occurs when a person discovers or applies a highly compressed representation of complex phenomena.
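To make the addition example concrete, here is a minimal Python sketch (function names are my own): the convoluted model and plain addition are extensionally identical, so no amount of input-output testing can separate them.

```python
# Two models of addition: the plain one, and the "unnecessarily
# convoluted but entirely correct" 2 + a + b - 2 variant.
def add(a, b):
    return a + b

def add_convoluted(a, b):
    return 2 + a + b - 2

# The two agree on every input, so testing input-output behavior
# alone cannot distinguish them.
print(all(add(a, b) == add_convoluted(a, b)
          for a in range(-50, 50) for b in range(-50, 50)))
```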
godelski 5 days ago
I think this is close enough. I'd say "a model's ability to make accurate predictions is not necessarily related to the model's ability to generate counterfactual predictions." I'm saying you can make extremely accurate predictions with an incorrect world model. This isn't conjecture, either; it's something we're extremely confident about in science.
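One way to see "accurate predictions, wrong world model" concretely is a confounded system; the variables and coefficients in this Python sketch are invented for illustration. A model that predicts y directly from x scores superbly on observational data, yet its implied counterfactual (change x, y follows) is false, because a hidden common cause drives both.

```python
import random

random.seed(0)

# Hidden common cause z drives both x and y; x does NOT cause y.
data = []
for _ in range(1000):
    z = random.gauss(0, 1)
    x = 2 * z + random.gauss(0, 0.1)
    y = 3 * z + random.gauss(0, 0.1)
    data.append((x, y))

# "Wrong world model": predict y directly from x as y ~= 1.5 * x.
# Observationally this is extremely accurate...
errors = [abs(y - 1.5 * x) for x, y in data]
print("mean prediction error:", sum(errors) / len(errors))

# ...but under an intervention do(x := x + 10), y would not move at all,
# while this model predicts y jumps by 15. Accurate predictions,
# incorrect causal structure.
```

This is exactly the confounding-variable situation a causal statistics text walks through: intervention, not observational fit, is what exposes the wrong model.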
I gave it as a trivial example, not as a complete one (as stated), so be careful about extrapolating limitations of the example into limitations of the argument. For a more complex example, I highly suggest looking at the actual history of the heliocentric vs geocentric debate. You'll have to make an active effort to understand it, because what you were taught in school is very likely a (very reasonable) oversimplification. Would you like a much more complex mathematical example? It'll take a little while to construct and be a lot harder to understand. As a simple example, you can always take a Taylor expansion of something to approximate it; but if you want an example that is wrong and not merely an approximation, I'll need some time (and a specific ask).

Here's a pretty famous example, with Freeman Dyson recounting an experience with Fermi [0]. Dyson's model made accurate predictions, yet Fermi was able to dismiss Dyson's idea quickly despite strong numerical agreement between the model and the data. It took years to determine that, despite its accurate predictions, it was not an accurate world model. *These situations are commonplace in science.* That is why you need more than experimental agreement.

Btw, experiments are more informative than observations: you can intervene in experiments, you can't in observations. This is a critical aspect of discovering counterfactuals. If you want to understand this more deeply, I suggest picking up any book that teaches causal statistics, or any book on the subject of metaphysics. A causal statistics book will teach you this as you learn about confounding variables and structural equation modeling. For metaphysics, Ian Hacking's "Representing and Intervening" is a good pick, as is Polya's famous "How To Solve It" (though it is metamathematics).

[0] https://www.youtube.com/watch?v=hV41QEKiMlM (Mind you, Dyson says he "went with the math instead of the physics," but what he's actually talking about is an aspect of metamathematics. That's what Fermi was teaching Dyson.)
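The Taylor-expansion point can be sketched in a few lines of Python: a truncated series for sin is extremely accurate in the regime where you test it, and badly wrong outside it, even though nothing in the local tests hints at the failure.

```python
import math

def sin_taylor(x, terms=4):
    # 4-term Taylor polynomial of sin about 0:
    # x - x^3/3! + x^5/5! - x^7/7!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Near 0, predictions agree with reality to many decimal places...
print(abs(math.sin(0.5) - sin_taylor(0.5)))   # tiny error

# ...but far from 0 the same model is wildly wrong, though every
# test confined to the neighborhood of 0 would have passed.
print(abs(math.sin(10.0) - sin_taylor(10.0)))  # huge error
```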