“…Then, a few examples with higher confidence levels are selected and used, together with L, to retrain the ensemble classifier. However, adding the selected data to the training data is not guaranteed to improve classification performance [35]. Therefore, various approaches have been proposed in the literature for selecting a small amount of useful unlabeled data (U_s) from U: these include self-training [25,30,40] and co-training [3,12,18] approaches, confidence-based approaches [19,20,23], density/distance-based approaches [8,27,28], and other approaches used in active learning (AL) algorithms [7,11,33].…”
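Since the excerpt describes the generic confidence-based self-training loop (train on L, select a high-confidence subset U_s from U, pseudo-label it, and retrain), a minimal sketch may clarify the procedure. It assumes a scikit-learn-style classifier; the threshold value, function name, and stopping rule are illustrative and not taken from the cited approaches.

```python
# Minimal sketch of confidence-based self-training: fit on L, pseudo-label the
# most confident predictions from U (the subset U_s), and retrain.
# The threshold and variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_labeled, y_labeled, X_unlabeled,
                  confidence_threshold=0.95, max_rounds=5):
    clf = LogisticRegression(max_iter=1000)
    X_l, y_l = X_labeled.copy(), y_labeled.copy()
    X_u = X_unlabeled.copy()
    for _ in range(max_rounds):
        clf.fit(X_l, y_l)                        # retrain on current labeled set
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)           # class probabilities on U
        conf = proba.max(axis=1)                 # confidence of the top class
        mask = conf >= confidence_threshold      # select U_s: confident examples
        if not mask.any():
            break                                # nothing confident enough; stop
        pseudo = clf.classes_[proba[mask].argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[mask]])        # add U_s to the training data
        y_l = np.concatenate([y_l, pseudo])
        X_u = X_u[~mask]                         # remove selected points from U
    return clf
```

As the excerpt notes, a fixed confidence threshold alone does not guarantee that the added pseudo-labeled examples improve performance, which motivates the density/distance-based and active-learning selection strategies cited above.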