“…Feature spaces combining two or more of the above descriptors are also frequent, with descriptors customarily extracted from the whole image; yet in some cases, regular or randomly distributed sub-windows/patches have been used, either on their own or in conjunction with the whole-image feature space. A number of well-established classifiers have been assessed, including (i) k-Nearest Neighbours (kNN) (André et al., 2012b; Desir et al., 2010; Hebert et al., 2012; Saint-Réquier et al., 2009; Srivastava et al., 2005; Srivastava et al., 2008), (ii) Linear and Quadratic Discriminant Analysis (LDA and QDA) (Leonovych et al., 2018; Srivastava et al., 2005; Srivastava et al., 2008), (iii) Support Vector Machines (SVM) and their adaptation with Recursive Feature Elimination (SVM-RFE) (Desir et al., 2010; Desir et al., 2012b; Jaremenko et al., 2015; Leonovych et al., 2018; Petitjean et al., 2009; Rakotomamonjy et al., 2014; Saint-Réquier et al., 2009; Vo et al., 2017; Wan et al., 2015; Zubiolo et al., 2014), (iv) Random Forests (RF) and variants such as Extremely Randomised Trees (ET) (Desir et al., 2012a; Heutte et al., 2016; Jaremenko et al., 2015; Leonovych et al., 2018; Seth et al., 2016; Vo et al., 2017), (v) Gaussian Mixture Models (GMM) (He et al., 2012; Perperidis et al., 2016), (vi) Boosted Cascade of Classifiers (Hebert et al., 2012), (vii) Neural Networks (NN) (Ştefănescu et al., 2016), (viii) Gaussian Process Classifiers (GPC), and (ix) Lasso Generalised Linear Models (GLM) (Seth et al., 2016). Most studies employed leave-k-out and k-fold cross-validation to assess the predictive capacity of the proposed methodologies on limited, pre-annotated frames.…”
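As a minimal, self-contained sketch of the evaluation protocol described above (a classifier assessed via k-fold cross-validation on a small set of annotated samples), the following plain-Python example pairs one of the listed classifiers, kNN, with a hand-rolled k-fold split. The function names and the toy two-class data are illustrative assumptions, not taken from any of the cited studies:

```python
import random
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points (squared Euclidean distance). Each item is (features, label)."""
    nearest = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)),
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def k_fold_accuracy(data, n_folds=5, k=3, seed=0):
    """Mean kNN accuracy over n_folds cross-validation splits."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::n_folds] for i in range(n_folds)]
    scores = []
    for i in range(n_folds):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        correct = sum(knn_predict(train, feats, k) == lab for feats, lab in test)
        scores.append(correct / len(test))
    return sum(scores) / n_folds

# Toy data (hypothetical): class 0 clustered near (0, 0), class 1 near (5, 5).
rng = random.Random(1)
data = [((rng.gauss(0, 1), rng.gauss(0, 1)), 0) for _ in range(40)] + \
       [((rng.gauss(5, 1), rng.gauss(5, 1)), 1) for _ in range(40)]
print(round(k_fold_accuracy(data), 2))
```

With well-separated toy clusters the cross-validated accuracy is near 1.0; with real, limited pre-annotated frames the same protocol gives a far less optimistic (and more honest) estimate than training-set accuracy.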