With the widespread use of end devices, online multi-label learning has become popular, as the data generated by Internet of Things devices are huge and rapidly updated. However, in many scenarios, user data are generated in a geographically distributed manner, making them inefficient and difficult to centralize for training machine learning models. At the same time, current mainstream distributed learning algorithms require a centralized server to aggregate data from distributed nodes, which inevitably puts user privacy at risk. To overcome this issue, we propose a distributed approach for multi-label classification that trains models on distributed computing nodes without sharing the source data of any node. In our proposed method, each node trains its model on its local online data while also learning from its neighbour nodes without transferring training data. As a result, our method realizes online distributed multi-label classification without losing performance relative to existing centralized algorithms. Experiments show that our algorithm outperforms the centralized online multi-label classification algorithm in F1 score, by 0.0776 in macro F1 and 0.1471 in micro F1 on average. For the Hamming loss, each algorithm wins on some datasets, and our algorithm trails the centralized approach by only 0.005 on average, which is negligible. Furthermore, neither the size of the network nor its degree of connectivity affects the performance of the proposed distributed online multi-label learning algorithm.
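The abstract does not give the paper's exact update rule, but the pattern it describes, local online training plus learning from neighbours without moving data, can be sketched generically. Below is a minimal illustration assuming a binary-relevance linear model and simple gossip averaging of parameters over a node graph; all names (`Node`, `gossip_round`, the adjacency encoding) are hypothetical, not the paper's API.

```python
# Hypothetical sketch: decentralized online multi-label learning via
# parameter gossip. Each node fits its own data stream and periodically
# averages model weights with its graph neighbours, so raw training data
# never leaves a node. The actual algorithm in the paper may differ.
import numpy as np

class Node:
    def __init__(self, n_features, n_labels, lr=0.1):
        self.W = np.zeros((n_labels, n_features))  # one linear scorer per label
        self.lr = lr

    def local_update(self, x, y):
        """One online logistic-regression step per label (binary relevance)."""
        p = 1.0 / (1.0 + np.exp(-self.W @ x))      # per-label probabilities
        self.W += self.lr * np.outer(y - p, x)     # gradient step on local data

    def predict(self, x):
        return (self.W @ x > 0).astype(int)        # 0/1 label vector

def gossip_round(nodes, adjacency):
    """Average each node's weights with its neighbours' (parameters only)."""
    new_W = []
    for i in range(len(nodes)):
        group = [j for j, a in enumerate(adjacency[i]) if a] + [i]
        new_W.append(np.mean([nodes[j].W for j in group], axis=0))
    for node, W in zip(nodes, new_W):
        node.W = W
```

In this sketch only the weight matrices cross node boundaries, matching the abstract's claim that each node learns from its neighbours without transferring the source data.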
Semi-supervised learning (SSL) is a popular research area in machine learning that utilizes both labeled and unlabeled data. Pseudo-labeling, an important method for generating artificial hard labels for unlabeled data, is applied with a high, fixed confidence threshold in most state-of-the-art SSL models. However, early in training the model prefers certain classes that are easy to learn, which results in a highly skewed class imbalance in the generated hard labels. This class imbalance leads to less effective learning of the minority classes and slower convergence of the training model. The aim of this paper is to mitigate the performance degradation caused by class imbalance and to gradually reduce the class imbalance in the unsupervised part. To achieve this objective, we propose FocalMatch, a novel SSL method that combines FixMatch and focal loss. FocalMatch adjusts the loss weight of each sample according to how well its prediction matches its pseudo label, which accelerates learning and model convergence and achieves state-of-the-art performance on several semi-supervised learning benchmarks. In particular, its effectiveness is demonstrated on datasets with extremely limited labeled data.
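The combination the abstract describes, FixMatch-style pseudo-labeling with a focal weighting on the unsupervised loss, can be sketched as follows. This is an illustrative PyTorch reading of the idea, not the paper's exact formulation; the threshold and gamma values are assumptions, and `focalmatch_unsup_loss` is a hypothetical name.

```python
# Hypothetical sketch of a FocalMatch-style unsupervised loss: pseudo labels
# come from confident predictions on weakly augmented inputs (as in FixMatch),
# and the cross-entropy on the strongly augmented view is modulated by a
# focal factor (1 - p_t)^gamma that down-weights samples whose predictions
# already match their pseudo labels well.
import torch
import torch.nn.functional as F

def focalmatch_unsup_loss(logits_weak, logits_strong, threshold=0.95, gamma=2.0):
    probs_weak = torch.softmax(logits_weak.detach(), dim=-1)
    conf, pseudo = probs_weak.max(dim=-1)          # confidence and hard label
    mask = (conf >= threshold).float()             # keep only confident samples

    log_probs = F.log_softmax(logits_strong, dim=-1)
    p_t = log_probs.exp().gather(1, pseudo.unsqueeze(1)).squeeze(1)
    focal = (1.0 - p_t) ** gamma                   # focal modulating factor
    ce = F.nll_loss(log_probs, pseudo, reduction="none")
    return (focal * ce * mask).mean()
```

Because well-fit samples contribute little loss under the focal factor, gradient signal concentrates on the harder, typically minority-class samples, which is one plausible mechanism for the imbalance reduction the abstract claims.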