ComplexSystems 3 days ago

"Bayesian counterargument (in caricature form) would be that MLE frequentists just choose an arbitrary (flat) prior, and penalty hyperparameters (common in NN) are a de facto prior."

This has been my view for a while now. Is this not correct?

In general, I think the idea of a big "frequentist vs. Bayesian" debate is silly. It is very useful to take frequentist ideas and see what they look like from a Bayesian point of view, and vice versa (when applicable). This seems to be the prevailing stance in the field: one is expected to understand, for instance, that regularization methods correspond to certain priors, and more broadly to be able to relate the two perspectives wherever possible.
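To make the "penalty = de facto prior" claim concrete, here is a minimal sketch for linear regression: the ridge (L2-penalized MLE) solution coincides with the MAP estimate under a zero-mean Gaussian prior on the weights, with prior variance tau^2 = sigma^2 / lambda. The data, noise level, and penalty value are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3                         # observation noise std (assumed known)
y = X @ X.T @ np.linalg.pinv(X).T @ w_true if False else X @ w_true + sigma * rng.normal(size=50)

lam = 0.7                           # ridge penalty (hypothetical value)
tau2 = sigma**2 / lam               # the Gaussian prior variance implied by lam

# Penalized MLE (ridge): argmin_w ||y - Xw||^2 + lam * ||w||^2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# MAP under prior w ~ N(0, tau2 * I) and likelihood y ~ N(Xw, sigma^2 * I):
# the posterior mode/mean is (X^T X + (sigma^2 / tau2) I)^{-1} X^T y
w_map = np.linalg.solve(X.T @ X + (sigma**2 / tau2) * np.eye(3), X.T @ y)

assert np.allclose(w_ridge, w_map)  # identical: the penalty acts as a prior
print(w_ridge)
```

The mapping runs both ways: any choice of lam implicitly commits you to a prior scale tau, whether or not you think of it that way.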

duvenaud 3 days ago

I would argue against the idea that "MLE is just Bayes with a flat prior". The power of Bayes usually comes mainly from keeping around all the hypotheses that are compatible with the data, not from the prior. This is especially true in domains where something black-box (essentially prior-less), like a neural net, has any chance of working.
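The distinction shows up even in the same toy linear-regression setting as above: MAP/ridge returns one point estimate, while the full Bayesian treatment also keeps a posterior covariance over weights, so predictive uncertainty grows away from the data. A minimal sketch, with assumed values for the noise std sigma and prior std tau:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
w_true = np.array([1.0, -1.0])
sigma, tau = 0.3, 1.0                    # noise std and prior std (assumed)
y = X @ w_true + sigma * rng.normal(size=20)

A = X.T @ X + (sigma**2 / tau**2) * np.eye(2)
w_mean = np.linalg.solve(A, X.T @ y)     # posterior mean (= the ridge point estimate)
w_cov = sigma**2 * np.linalg.inv(A)      # posterior covariance: the hypotheses kept around

x_near = X.mean(axis=0)                  # a test point near the training data
x_far = 10.0 * np.ones(2)                # a test point far from the training data

for x in (x_near, x_far):
    pred_var = x @ w_cov @ x + sigma**2  # predictive variance marginalizes over w
    print(f"prediction {x @ w_mean:8.2f}  +/- {np.sqrt(pred_var):.2f}")
# MLE/MAP would report the same point predictions with no parameter uncertainty;
# the Bayesian answer widens far from the data because many w remain compatible.
```

For two weights this marginalization is exact and cheap; the hard (and interesting) part is doing anything comparable over the hypotheses of a neural net.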