The performance of machine learning models deteriorates when predicting the Remaining Useful Life (RUL) of equipment or forecasting faults because of concept drift. The issue is aggravated when the problem setting involves multi-class imbalanced data. Existing drift detection methods are designed to detect particular drift types in specific scenarios. For example, a drift detector designed for binary-class data may not produce satisfactory results for applications that generate multi-class data. Similarly, a drift detection method designed to detect sudden drift may struggle to detect incremental drift. Therefore, in this experimental investigation, we evaluate the performance of existing drift detection methods on multi-class imbalanced data streams containing different drift types. For this purpose, the study simulates streams that exhibit various forms of concept drift together with multi-class imbalance and tests the existing drift detection methods on them. The findings will aid in selecting drift detection methods when developing solutions for real-time industrial applications that face similar issues. The results reveal that, among the compared methods, DDM produced the best average F1 score. They also indicate that multi-class imbalance increases the false alarm rate of most of the drift detection methods.
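Since DDM is singled out as the best-performing detector, the following is a minimal, self-contained sketch of the classic DDM (Drift Detection Method) logic as described by Gama et al.: it tracks the online error rate and signals a warning or a drift when the rate rises significantly above its historical minimum. The class name, default thresholds, and the synthetic error stream are illustrative assumptions, not the authors' implementation or experimental setup.

```python
import math
import random


class DDM:
    """Minimal sketch of the Drift Detection Method (DDM).

    Tracks the online error rate p and its standard deviation
    s = sqrt(p * (1 - p) / n). A warning is raised when
    p + s > p_min + 2 * s_min, and a drift when p + s > p_min + 3 * s_min.
    """

    def __init__(self, warning_level=2.0, drift_level=3.0, min_samples=30):
        self.warning_level = warning_level
        self.drift_level = drift_level
        self.min_samples = min_samples
        self._reset()

    def _reset(self):
        self.n = 0                     # predictions seen since the last drift
        self.p = 1.0                   # running error rate
        self.s = 0.0                   # std of the error rate
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """error: 1 if the model misclassified the instance, else 0.

        Returns 'drift', 'warning', or 'stable'.
        """
        self.n += 1
        # Incremental update of the error rate and its standard deviation.
        self.p += (error - self.p) / self.n
        self.s = math.sqrt(self.p * (1.0 - self.p) / self.n)

        if self.n < self.min_samples:
            return "stable"

        # Remember the best (lowest) p + s observed so far.
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s

        if self.p + self.s > self.p_min + self.drift_level * self.s_min:
            self._reset()              # drift confirmed: restart the statistics
            return "drift"
        if self.p + self.s > self.p_min + self.warning_level * self.s_min:
            return "warning"
        return "stable"


if __name__ == "__main__":
    random.seed(0)
    detector = DDM()
    # Synthetic error stream: 10% error rate, then a sudden jump to 45%.
    stream = [int(random.random() < 0.10) for _ in range(1000)] + \
             [int(random.random() < 0.45) for _ in range(1000)]
    for i, err in enumerate(stream):
        if detector.update(err) == "drift":
            print(f"Drift signalled at instance {i}")
```

In a streaming pipeline, `update` would be fed the per-instance correctness of the deployed classifier; the warning state is commonly used to start buffering recent instances so the model can be retrained once drift is confirmed.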
Unequal data distribution among classes usually causes a class imbalance problem. Due to class imbalance, classification models become biased toward the majority class and misclassify the minority class. The issue becomes more complex when it occurs in multi-class data. The most common way to handle class imbalance is data resampling, which involves either oversampling minority-class instances or undersampling majority-class instances. Undersampling risks losing crucial information, whereas oversampling can cause overfitting. Therefore, we propose a novel Cluster-based Hybrid Sampling for Imbalanced Data (CBHSID) strategy to address these issues. We calculate the mean number of observations per class and use it as a threshold to separate majority and minority classes. We then apply affinity propagation clustering to each class to create sub-clusters and compute the distance of each data item from the centroid of its sub-cluster. During undersampling, we remove observations that lie far from the sub-cluster centre; during oversampling, we generate synthetic samples from observations close to the sub-cluster centre. We compared the proposed approach with several state-of-the-art data balancing methods on 12 binary and 4 multi-class benchmark datasets. Based on Geometric Mean (G-Mean), Recall, and F1-score, our method outperformed the compared methods on 14 of the 16 datasets. We conclude that CBHSID is suitable for addressing class imbalance in both binary and multi-class classification. So far, we have only validated CBHSID on stationary data; testing it on non-stationary data streams in online learning environments remains future work.
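Based only on the description above, here is a minimal sketch of the cluster-based hybrid sampling idea: classes larger than the mean class size are undersampled by dropping points farthest from their affinity-propagation sub-cluster centroids, while smaller classes are oversampled by interpolating between the centroid and nearby members. The function name, trimming fraction, and interpolation rule are assumptions for illustration, not the authors' CBHSID implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation


def hybrid_resample(X, y, trim_frac=0.2, random_state=0):
    """Cluster-based hybrid sampling sketch (hypothetical helper).

    Majority classes (size > mean class size) are undersampled by removing
    the points farthest from their sub-cluster centroids; minority classes
    are oversampled with synthetic points interpolated near the centroids.
    """
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    mean_size = counts.mean()            # threshold separating majority/minority
    X_out, y_out = [], []

    for cls, size in zip(classes, counts):
        X_c = X[y == cls]
        ap = AffinityPropagation(random_state=random_state).fit(X_c)
        labels, centers = ap.labels_, ap.cluster_centers_
        if len(centers) == 0:            # fallback if clustering does not converge
            centers = X_c.mean(axis=0, keepdims=True)
            labels = np.zeros(len(X_c), dtype=int)

        kept = []
        for k, center in enumerate(centers):
            members = X_c[labels == k]
            dist = np.linalg.norm(members - center, axis=1)
            order = np.argsort(dist)     # nearest to the centroid first

            if size > mean_size:
                # Undersample: drop the trim_frac farthest members of the sub-cluster.
                n_keep = max(1, int(round(len(members) * (1 - trim_frac))))
                kept.append(members[order[:n_keep]])
            else:
                kept.append(members)
                # Oversample: interpolate between the centroid and nearby members.
                n_new = int(round(len(members) * trim_frac)) or 1
                near = members[order[: max(2, len(members) // 2)]]
                picks = near[rng.integers(0, len(near), size=n_new)]
                alphas = rng.random((n_new, 1))
                kept.append(center + alphas * (picks - center))

        X_new = np.vstack(kept)
        X_out.append(X_new)
        y_out.append(np.full(len(X_new), cls))

    return np.vstack(X_out), np.concatenate(y_out)
```

The sketch expects a NumPy feature matrix `X` and label vector `y` and returns a rebalanced pair suitable for training; in practice the trimming and interpolation fractions would be tuned per dataset rather than fixed at 0.2.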