“…Sparse representation classifiers have also been integrated into the boosting model as weak classifiers based on random subsets of reference images [37], [49]. The combination of weak classifiers is normally based on a predefined weighting scheme, such as majority voting or averaging of probabilities in bagging [5], [14], [15], [22], [47], [48], or choosing the best-performing weak classifier at each training iteration with error-based weight computation in boosting [21], [37], [46], [49]. While these weighting schemes are often effective, they are predefined and greedy, and might not reflect the best adaptation to the dataset.…”
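To make the two families of predefined weighting schemes concrete, the sketch below contrasts bagging-style majority voting (every weak classifier gets an equal vote) with AdaBoost-style error-based weighting, where each classifier's vote is scaled by α = ½ ln((1 − ε)/ε) computed from its training error ε. This is a minimal NumPy illustration of the generic schemes named in the text, not an implementation from any of the cited works; the function names and the ±1 label convention are assumptions for the example.

```python
import numpy as np

def majority_vote(predictions):
    """Bagging-style aggregation: each weak classifier casts one equal vote.

    `predictions` has shape (n_classifiers, n_samples) with labels in {-1, +1};
    the sign of the column sum is the majority decision.
    """
    preds = np.asarray(predictions, dtype=float)
    return np.sign(preds.sum(axis=0))

def adaboost_alpha(error, eps=1e-10):
    """Error-based weight for one weak classifier: alpha = 0.5 * ln((1-e)/e).

    The error is clipped away from 0 and 1 to keep the logarithm finite.
    """
    error = np.clip(error, eps, 1 - eps)
    return 0.5 * np.log((1 - error) / error)

def weighted_vote(predictions, errors):
    """Boosting-style aggregation: votes scaled by each classifier's alpha."""
    preds = np.asarray(predictions, dtype=float)
    alphas = np.array([adaboost_alpha(e) for e in errors])
    return np.sign(alphas @ preds)

# With unequal training errors, the weighted vote can overturn the majority:
# a low-error classifier outvotes two weaker ones that disagree with it.
preds = [[1, 1, -1],
         [1, -1, -1],
         [-1, -1, -1]]
print(majority_vote(preds))                      # equal votes
print(weighted_vote(preds, [0.1, 0.4, 0.45]))    # error-weighted votes
```

Both schemes fix the combination rule in advance, which is exactly the limitation the passage raises: the weights are set by a predetermined formula (equal votes, or a function of per-round error) rather than being adapted jointly to the dataset.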