Due to the vast and rapid increase in the size of data, machine learning has become an increasingly popular approach to data classification, which can be performed by training either a single classifier or a group of classifiers. A single classifier is typically learned using a standard algorithm, such as C4.5. Because each standard learning algorithm has its own advantages and disadvantages, ensemble learning methods, such as Bagging, have been increasingly used to learn a group of classifiers for collaborative classification, thus compensating for the weaknesses of individual classifiers. In particular, a group of base classifiers is learned in the training stage, and then some or all of these base classifiers are employed to classify unseen instances in the testing stage. In this paper, we address two critical points that affect classification accuracy, in order to overcome the limitations of the Bagging approach. Firstly, it is important to judge effectively which base classifiers qualify to be employed for classifying test instances. Secondly, the final classification is obtained by combining the outputs of the base classifiers, i.e. by voting, so the voting strategy can greatly affect whether a test instance is classified correctly. To address these points, we propose a nature-inspired ensemble learning approach to improve overall accuracy in the setting of granular computing. The proposed approach is validated through experimental studies using real-life data sets. The results show that it effectively overcomes the limitations of the Bagging approach.
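The Bagging workflow described above (bootstrap resampling, training of base classifiers, majority voting over their outputs) can be sketched in plain Python. The decision-stump base learner, the toy data set, and all function names below are illustrative assumptions, not part of the proposed approach:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # Draw |data| points with replacement: the resampling step of Bagging
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    # A minimal base learner: a one-feature decision stump, chosen by
    # exhaustive search over (feature, threshold, sign) on the sample.
    best = None
    n_features = len(sample[0][0])
    for f in range(n_features):
        for x, _ in sample:
            t = x[f]
            for sign in (1, -1):
                preds = [1 if sign * (xi[f] - t) >= 0 else 0 for xi, _ in sample]
                err = sum(p != y for p, (_, y) in zip(preds, sample))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda x: 1 if sign * (x[f] - t) >= 0 else 0

def bagging_predict(classifiers, x):
    # Combine the base classifiers' outputs by unweighted majority voting
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Toy separable data set: class 1 for values above 5, class 0 otherwise
data = [([v], 1 if v > 5 else 0) for v in range(11)]
rng = random.Random(0)
ensemble = [train_stump(bootstrap_sample(data, rng)) for _ in range(15)]
print(bagging_predict(ensemble, [10]))  # clearly in class 1
print(bagging_predict(ensemble, [0]))   # clearly in class 0
```

In this sketch all base classifiers receive an equal vote; the points raised in the abstract correspond to (i) filtering which members of `ensemble` participate at all and (ii) replacing the unweighted majority vote with a more discriminating combination strategy.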