jmalicki, 11 hours ago:
I feel like I'm a polyglot here, but primarily a native frequentist thinker. I've found Bayesian methods shine in cases of an "intractable partition function": cases such as language models, where the cardinality of your discrete probability distribution is extremely large, to the point of intractability. Bayesians tend to reach immediately for things like Monte Carlo estimation. Is that fundamentally Bayesian and anti-frequentist? Not really... it's just that being open to Bayesian ways of thinking leads you there more readily.

Reinforcement learning also feels much more naturally Bayesian. I mean, Thompson sampling, the granddaddy of RL, was developed through a frequentist lens, but it feels very Bayesian as well. In the modern era we have Stein's paradox, and it all feels the same.

Hardcore Bayesians who seem to deeply hate the Kolmogorov measure-theoretic approach to probability are always interesting to me as some of the last true radicals. For 99% of the world today, I feel like these are all just tools, and we use them where they're useful.
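To make the "feels Bayesian" point concrete, here is a minimal sketch of Thompson sampling for a two-armed Bernoulli bandit. The arm payout rates and round count are made up for illustration; the Bayesian flavor is that each arm keeps a Beta posterior over its win rate, and you act by sampling from those posteriors rather than by maximizing a point estimate.

```python
import random

random.seed(0)

# Hypothetical two-armed bandit with unknown Bernoulli payout rates.
TRUE_RATES = [0.3, 0.6]

# Beta(1, 1) uniform priors: a = successes + 1, b = failures + 1.
a = [1, 1]
b = [1, 1]
pulls = [0, 0]

for _ in range(2000):
    # Draw one plausible win rate per arm from its current posterior,
    # then play the arm whose sampled rate is highest.
    samples = [random.betavariate(a[i], b[i]) for i in range(2)]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < TRUE_RATES[arm] else 0
    # Conjugate posterior update: Beta parameters count wins and losses.
    a[arm] += reward
    b[arm] += 1 - reward
    pulls[arm] += 1

# Over many rounds the posterior for the worse arm stays wide enough to
# get occasional exploratory pulls, but most plays go to the better arm.
print(pulls)
```

The exploration/exploitation trade-off falls out of the posterior sampling itself: no explicit epsilon or exploration bonus is needed, which is part of why the method reads as so naturally Bayesian even though Thompson originally framed it differently.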
jb1991, 8 hours ago (in reply):
When you're using something like Monte Carlo, you're probably using a method more advanced than naive Bayes, is that right?