jjk166 17 hours ago
> And the epistemology shifts in ways that might be uncomfortable. Instead of "I understand the causal mechanism and can predict what happens if I change X," you get something more like "I have a sufficiently rich model that I can simulate what happens if I change X, with probabilistic confidence." The answers are distributions, not deterministic outputs. That's a different kind of knowing.

Being able to simulate something is not a kind of knowing. It is, in fact, the opposite of knowing. If you know how a system behaves, there is no need to simulate it. In particular, if the model you need to simulate it is far more complicated than the phenomenon itself, you really, really don't understand it.

I'm reminded of Feynman's observation that simulating a quantum system, like an atom, with classical methods requires a tremendous number of atoms, and his intuition that there should be a much more compact way to perform such calculations. That intuition is the conceptual underpinning of quantum computation.

A billion-parameter neural network may work as a functional tool, but these supposedly complex problems simply don't have billions of relevant free parameters. You're not going to understand a hurricane by feeding terabytes of data into a model to find the butterfly that flapped its wings in just the wrong way at just the wrong time. Sure, extremely small differences in starting conditions can lead to radically different outcomes, and a butterfly flapping its wings could have influenced a hurricane in some way. But if you understand how hurricanes work, you know that butterfly's influence is just noise: the hurricane starts and progresses as it does because of temperature gradients on the ocean. If you found the butterfly and stopped it from flapping its wings, the conditions for the hurricane would still exist and something else would set the storm in motion.
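The sensitivity-versus-noise distinction above can be sketched with the logistic map, a standard toy chaotic system (an illustrative example of my own, not anything from the thread): two trajectories starting a tiny perturbation apart diverge to macroscopic separation within a few dozen steps, yet the qualitative behavior, bounded chaotic oscillation, is set by the single parameter r, not by which perturbation you started with.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), with r = 4 (fully chaotic regime).
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturbed by a "butterfly"
diffs = [abs(x - y) for x, y in zip(a, b)]

# The gap grows roughly exponentially (about doubling per step), so the
# two runs fully decorrelate long before step 60 ...
print(f"initial gap: {diffs[0]:.1e}, max gap: {max(diffs):.2f}")
# ... yet both trajectories stay bounded in [0, 1]: the system's overall
# character comes from r (the "temperature gradient"), not the butterfly.
assert all(0.0 <= x <= 1.0 for x in a + b)
```

The analogy to the hurricane argument: the perturbation decides *which* particular path the system takes, but the conditions for chaos are present either way.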
Billion-parameter theories work in practice because if you throw everything at the wall, the small amount of stuff that can stick will. Likewise, if you throw enough data at a problem, whatever data is actually relevant will be analyzed. This can be useful as a stepping stone to understanding: interrogating the model to reveal which parameters have more relevance and the weights of their interactions. But the idea that having a tool that addresses a symptom of your ignorance means you are no longer ignorant is folly.
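To make the "stepping stone" concrete: fit a deliberately overparameterized model, then interrogate the fitted weights to see which inputs actually mattered. A minimal NumPy sketch (the feature count, indices, and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 candidate features, but only two actually drive the outcome.
n_samples, n_features = 500, 50
w_true = np.zeros(n_features)
w_true[3], w_true[17] = 5.0, -4.0

X = rng.standard_normal((n_samples, n_features))
y = X @ w_true + 0.1 * rng.standard_normal(n_samples)

# "Throw everything at the wall": least squares over all 50 features.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Interrogate the model: the relevant parameters stand out by magnitude,
# while the other 48 fitted weights hover near zero (pure noise).
top2 = np.argsort(np.abs(w_hat))[-2:]
print(sorted(top2.tolist()))  # → [3, 17]
```

The fitted model here is a tool, not an explanation; the understanding comes afterward, when you notice that 48 of the 50 parameters were never doing anything.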
BobbyTables2 17 hours ago
I think the "Hitchhiker's Guide to the Galaxy" passage about the train crashes caused by a broken clock was extremely prescient. I feel like enormous models will end up this way…