| ▲ | wodenokoto 10 hours ago | |
Are boosted decision trees the same as a boosted random forest?
| ▲ | boccaff 10 hours ago | parent | next [-] | |
Short answer: no. Longer answer: a random forest averages many trees that are trained to be decorrelated (bagging plus randomized trees). Boosting trains trees sequentially, with each new tree fit to the residuals of the ensemble so far. I am assuming you meant boosted decision trees, sometimes called gradient boosted decision trees, since one usually boosts plain decision trees. I think xgboost added a boosted random forest mode, and in principle you can boost any supervised model, but it is unusual.
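To make the distinction concrete, here is a minimal numpy sketch (toy data and decision stumps as stand-in trees, all made up for illustration): boosting fits each stump to the residuals of the ensemble so far, while bagging fits each stump to a bootstrap resample of the original targets and averages.

```python
import numpy as np

# Toy 1-D regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 200)
y = np.sin(X) + rng.normal(0, 0.1, 200)

def fit_stump(X, y):
    """Best single-split 'tree' (a stump) minimizing squared error."""
    best = None
    for t in np.unique(X):
        left, right = y[X <= t], y[X > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda x: np.where(x <= t, lv, rv)

# Boosting: each stump is fit to the residuals left by the ensemble so far,
# and contributions are *summed* (scaled by a learning rate).
lr, boost_pred = 0.5, np.zeros_like(y)
for _ in range(50):
    residual = y - boost_pred
    stump = fit_stump(X, residual)
    boost_pred += lr * stump(X)

# Bagging (random-forest style): each stump is fit to a bootstrap sample
# of the *original* targets, and predictions are *averaged*.
bag = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))
    bag.append(fit_stump(X[idx], y[idx]))
bag_pred = np.mean([s(X) for s in bag], axis=0)

print("boosting train MSE:", ((y - boost_pred) ** 2).mean())
print("bagging  train MSE:", ((y - bag_pred) ** 2).mean())
```

With weak learners like stumps, boosting drives the training error down round by round, while averaging bootstrap stumps stays biased — which is why random forests use deep trees while boosting gets away with shallow ones.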
| ▲ | hansvm 9 hours ago | parent | prev [-] | |
The training process differs, but the resulting models differ only in data, not code -- in both cases you evaluate a bunch of trees and add up their outputs. For better or worse (usually better), boosted decision trees work harder to optimize the tree structure for a given problem, while random forests rely on enough trees being good enough. Ignoring split selection, one technique people sometimes use makes the two more closely related: in gradient boosting, once the splits are chosen, optimizing the weights/leaves is a sparse linear algebra problem (iterative if your loss is not MSE). That step would unify part of the training between the two model types.
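A small numpy sketch of that last point (the tree structures here are hypothetical stand-ins -- in practice they come from the tree-growing step): with fixed splits and squared-error loss, the optimal leaf values across all trees fall out of one least-squares solve over a sparse 0/1 leaf-membership matrix.

```python
import numpy as np

# Toy data (illustrative only).
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, 100)
y = np.sin(X) + rng.normal(0, 0.1, 100)

# Two fixed "trees", each a single threshold split (hypothetical structures).
thresholds = [3.0, 7.0]

# Leaf-membership matrix: column j is 1 where sample i lands in leaf j.
# Two leaves per stump -> 4 columns. Sparse in a real implementation.
cols = []
for t in thresholds:
    cols.append((X <= t).astype(float))
    cols.append((X > t).astype(float))
Z = np.column_stack(cols)

# With MSE, the jointly optimal leaf values for the fixed structures are
# the least-squares solution: one linear solve, no iteration needed.
# (lstsq returns the minimum-norm solution, since Z is rank-deficient.)
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = Z @ w
print("train MSE with jointly optimized leaves:", ((y - pred) ** 2).mean())
```

For a non-MSE loss the normal equations no longer apply directly, so the leaf values would be refined iteratively (e.g. Newton steps per leaf), which is the "iterative" caveat above.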