witnessme 4 days ago
For folks wondering whether to read or not, here is the conclusion from the paper, verbatim:

> We have demonstrated that a single transformer-based model can effectively learn and predict the dynamics of diverse physical systems without explicit physics-specific features, marking a significant step toward true Physics Foundation Models. GPhyT not only outperforms specialized architectures on known physics by up to an order of magnitude but, more importantly, exhibits emergent in-context learning capabilities—inferring new boundary conditions and even entirely novel physical phenomena from input prompts alone.
godelski 3 days ago | parent
Thanks, I haven't been able to give the paper a proper read, but are they basing their claims on predictive results alone, or on the ability to recover physics equations? Those two things are very different. You can have models that make accurate predictions without having accurate models of "the world" (your environment, not necessarily the actual world)[0]. We can't meaningfully call something a physics model (or a world model) without that counterfactual recovery (you don't need the exact laws of physics, but you need something reasonable). After all, our physics equations are the most compressed form of representing the information we're after.

I ask because this is a weird thing that happens in a lot of ML papers approaching world models. Looking at results alone isn't enough to conclude that a world is being modeled. It doesn't even tell you whether the model is self-consistent, let alone counterfactual.

[0] The classic example is the geocentric model. It made accurate predictions, which is why it stuck around for so long. It's not like the heliocentric model didn't present new problems; there was reason for legitimate scientific debate at the time, but that context is easily lost to history.
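To make the distinction concrete, here's a toy sketch (my own example, not the paper's benchmark; all names and numbers are made up). A model that just fits the training distribution can score near-perfectly on held-out data from the same distribution, yet fail completely under a counterfactual intervention that any model of the actual dynamics would handle:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(g, n_steps=50, dt=0.1):
        # True dynamics: height of a dropped object under gravity g.
        h, v = 100.0, 0.0
        traj = []
        for _ in range(n_steps):
            v -= g * dt
            h += v * dt
            traj.append(h)
        return np.array(traj)

    # Training data: noisy trajectories, all generated with Earth gravity.
    train = np.stack([simulate(9.81) + rng.normal(0, 0.1, 50)
                      for _ in range(100)])

    # "Model": the mean training trajectory (pure curve-fitting, no physics).
    model = train.mean(axis=0)

    # In-distribution test: looks excellent, MSE is tiny.
    print("in-dist MSE:", np.mean((model - simulate(9.81)) ** 2))

    # Counterfactual probe: same system, lunar gravity. A model that had
    # actually encoded the dynamics would adapt; the curve-fit cannot.
    print("counterfactual MSE:", np.mean((model - simulate(1.62)) ** 2))

Both the curve-fit and a genuine dynamics model would pass the first test; only the counterfactual probe separates them. That's the kind of evidence I'd want before calling something a physics model.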