Ensemble learning is important in machine-learning applications because it leverages the collective decisions of multiple models to improve predictive performance and generalization. It can be viewed as a way to better approximate an optimal classifier by combining a number of base classifiers. To improve performance, the base classifiers should be sufficiently accurate and should make diverse (distinct) classification errors, and an appropriate technique is needed to combine their outputs. Numerous ensemble classification methods have been introduced, including voting, bagging, and boosting. In this study, an ensemble classifier based on the weighted mean of the base classifiers' outputs was proposed. The combination weights were estimated with a multi-objective genetic algorithm that considers classification error, diversity, sparsity, and density criteria. In experiments on UCI datasets, the proposed approach achieved a significant improvement in classification accuracy over conventional ensemble classifiers. Overall, the results showed that genetic-algorithm-based ensemble classifiers offer advantages such as a greater capability to handle complex datasets, improved robustness and generalization, and flexible adaptability, making them a valuable tool in various domains and contributing to more accurate and reliable predictions. Future studies should test and validate the method on more and larger datasets to establish its practical performance.
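
The Python sketch below illustrates the general idea of a weighted-mean ensemble whose combination weights are tuned by a genetic algorithm. It is not the authors' implementation: the paper's multi-objective GA over error, diversity, sparsity, and density is simplified here to a single scalarized fitness, and the base classifiers, dataset, penalty coefficients, and GA hyperparameters are all illustrative assumptions.

```python
# Minimal sketch: weighted-mean ensemble with GA-tuned combination weights.
# The multi-objective formulation (error, diversity, sparsity, density) is
# scalarized into one fitness value for brevity; coefficients are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Breast Cancer Wisconsin (a UCI dataset) split into train / validation / test.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)

# Base classifiers: reasonably accurate and structurally different,
# so their classification errors are not perfectly correlated.
base = [
    LogisticRegression(max_iter=5000).fit(X_train, y_train),
    DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train),
    KNeighborsClassifier(n_neighbors=7).fit(X_train, y_train),
]
P_val = np.stack([m.predict_proba(X_val)[:, 1] for m in base])    # (n_models, n_val)
P_test = np.stack([m.predict_proba(X_test)[:, 1] for m in base])  # (n_models, n_test)

def fitness(w):
    """Scalarized objective: validation error + sparsity penalty - diversity bonus."""
    w = np.abs(w)
    w = w / (w.sum() + 1e-12)                        # convex combination weights
    combined = w @ P_val                             # weighted mean of base outputs
    error = np.mean((combined >= 0.5).astype(int) != y_val)
    sparsity = 0.01 * np.count_nonzero(w > 1e-3)     # prefer using few base models
    diversity = np.mean(w @ (P_val - combined) ** 2) # weighted spread around the ensemble
    return error + sparsity - 0.05 * diversity

# Simple generational GA: elitism, binary tournament selection,
# blend crossover, and Gaussian mutation over the weight vector.
pop_size, n_gen, dim = 40, 60, len(base)
pop = rng.random((pop_size, dim))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmin()].copy()]          # keep the best individual
    while len(new_pop) < pop_size:
        a = rng.integers(pop_size, size=2)
        b = rng.integers(pop_size, size=2)
        p1 = pop[a[0]] if scores[a[0]] < scores[a[1]] else pop[a[1]]
        p2 = pop[b[0]] if scores[b[0]] < scores[b[1]] else pop[b[1]]
        alpha = rng.random(dim)                      # blend crossover
        child = alpha * p1 + (1 - alpha) * p2
        child += rng.normal(0.0, 0.1, dim)           # Gaussian mutation
        new_pop.append(np.clip(child, 0.0, None))
    pop = np.array(new_pop)

best = pop[np.array([fitness(ind) for ind in pop]).argmin()]
w = np.abs(best) / np.abs(best).sum()
pred = (w @ P_test >= 0.5).astype(int)
print("learned weights:", np.round(w, 3))
print("ensemble test accuracy:", np.mean(pred == y_test))
```

In this sketch the weights are normalized to a convex combination, so the ensemble output is a true weighted mean of the base classifiers' probability outputs; a multi-objective variant would instead keep error, diversity, sparsity, and density as separate objectives and select a solution from the resulting Pareto front.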