We would like to congratulate the authors for having put together a very interesting, thoughtful, and timely review paper about Random Forests (RF). This paper will definitely serve as a reference on RF and foster further theoretical and applied research on this family of algorithms.

In our comments below, we will refer to Biau and Scornet's review paper as BS. Based on our own experience with tree-based methods, we bring up a few complementary points that were not explicitly addressed in BS, and which we believe are important both to position some relevant related work and to suggest interesting directions for further research.
Alternative randomization schemes

Research on tree-based ensemble methods dates back to the 1990s and was motivated at that time by the success of generic ensemble methods for supervised learning, such as Breiman (1996)'s bagging and Freund and Schapire (1997)'s AdaBoost algorithms, which can be combined with any kind of base model. It was soon found that these ensemble methods are very effective when applied on top of decision or regression trees, and this success fostered further research, in particular towards the design of alternative randomization techniques exploiting the specificities of tree-based models. Among the methods published at that time, Breiman's Random Forest method (BRF) was designed by combining two randomization techniques previously proposed in two generic ensemble methods: Breiman (1996)'s own bagging idea, by using bootstrap resampling of the learning