Classifier ensembles are pattern recognition structures composed of a set of classification algorithms (members), organized in parallel, and a combination method, with the aim of increasing the accuracy of a classification system. In this study, we investigate generalized mixture (GM) functions as a new approach to providing an efficient combination procedure for these systems through the use of dynamic weights in the combination process. To this end, we present three GM functions to be applied as combination methods. The main advantage of these functions is that they define dynamic weights over the member outputs, making the combination process more efficient. To evaluate the feasibility of the proposed approach, an empirical analysis is conducted, applying classifier ensembles to 25 different classification data sets. In this analysis, we compare the proposed approaches to ensembles using traditional combination methods as well as state-of-the-art ensemble methods. Our findings indicate performance gains over the traditional methods and results comparable to the state-of-the-art ones.

This paper is divided into eight sections and is organized as follows. In Section 2, we describe some recent studies in classifier ensembles. The fundamental notions of generalized mixture functions are introduced in Section 3, while the basic concepts of classifier ensembles are presented in Section 4. The proposed approach is presented in Section 5. In Section 6, the experimental methodology is presented, while Section 7 presents an analysis of the obtained results of this work. Finally, Section 8 concludes this paper.
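To illustrate how a mixture function can assign input-dependent (dynamic) weights, the sketch below combines member outputs per class using a weight that depends on each output's distance from the median of all outputs. The specific weighting function and the use of class-support values as inputs are illustrative assumptions, not the exact GM functions proposed in this paper.

```python
import statistics

def mixture_combine(outputs):
    """Combine member support values for one class with dynamic weights.

    outputs: list of support values (e.g. posterior probabilities)
    produced by the ensemble members for a single class.

    Illustrative weighting (an assumption, not the paper's exact GM
    functions): members whose support is closer to the median of all
    supports receive larger weights, so the weights change per instance.
    """
    med = statistics.median(outputs)
    # Weight decreases with distance from the median; the +1 avoids zeros.
    weights = [1.0 / (1.0 + abs(o - med)) for o in outputs]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total

def ensemble_predict(support_matrix):
    """support_matrix[i][c]: support of member i for class c.

    Returns the index of the class with the highest combined support.
    """
    n_classes = len(support_matrix[0])
    combined = [
        mixture_combine([row[c] for row in support_matrix])
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=combined.__getitem__)
```

Because the weights are recomputed from the member outputs for every test instance, outlier members are down-weighted locally rather than once for the whole testing phase, which is the key difference from static weighted combination.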
Recent Studies in Weighted Combination Methods for Classifier Ensembles

As already mentioned, different ways of calculating the weights of each class for each individual classifier can be used in a classifier ensemble [10,39,45,46,50,51,57,70]. Although weighted combination methods appear to provide some flexibility, obtaining the optimal weights is not an easy task. Therefore, some optimization techniques have been applied to define the best set of weights, such as in [4,45,51,57]. In [57], for instance, a genetic algorithm (GA) was used to define an optimized set of weights that is used along with the outputs of the individual classifiers to define the final output of the classifier ensemble. However, all the aforementioned studies apply procedures that define static weights. In other words, these methods define a set of weights that is used throughout the testing phase. This static way of defining weights can become inefficient for a classifier ensemble, since the accuracy of an individual classifier can change across the testing search space, and this change is not captured by static weights.

In a dynamic weighting process, the outputs of all individual classifiers are aggregated and the most competent ones receive the highest weight values. The competence of the classifier o...