Semi-supervised anomaly detection identifies anomalies by modeling the distribution of normal data. Backpropagation neural network (BP-NN) based approaches have been drawing attention because of their high generalization performance on a wide range of real-world data. In a typical deployment, such BP-NN-based models are iteratively optimized on server machines with a large amount of data gathered from edge devices. However, this framework has two issues: (1) BP-NNs' iterative optimization often takes too long to follow time-series changes in the distribution of normal data (i.e., concept drift), and (2) data transfers between the server machines and the edge devices risk causing data breaches. To address these issues, we propose an ON-device sequential Learning semi-supervised Anomaly Detector called ONLAD and its FPGA-based IP core, ONLAD Core, so that various kinds of resource-limited edge devices can use our approach. Experimental results show that ONLAD has favorable anomaly detection capability, especially in an environment that simulates concept drift. Evaluations of ONLAD Core confirm that it performs training and prediction computations x1.95 ∼ x4.51 and x2.29 ∼ x4.73 faster, respectively, than BP-NN-based software implementations. We also demonstrate that our on-board implementation, which integrates ONLAD Core, works at x6.3 ∼ x25.4 lower power consumption while training computations are continuously executed.
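The semi-supervised setting above is commonly realized with an autoencoder: train on normal data only, and flag any input whose reconstruction error exceeds a threshold fitted to the normal data. The sketch below illustrates this idea with a linear (PCA-style) autoencoder in NumPy; it is only an assumption-laden stand-in, since ONLAD itself builds on an OS-ELM-based autoencoder in hardware, and all sizes, names, and the 99th-percentile threshold here are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: in the semi-supervised setting the detector
# only ever sees normal samples during training. (Synthetic, illustrative.)
normal = rng.normal(size=(500, 8))

# Linear autoencoder with tied weights: encode 8 dims -> 2, decode back.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # top-2 principal directions act as encoder/decoder

def reconstruction_error(x):
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return np.mean((x - x_hat) ** 2, axis=-1)

# Threshold fitted on the normal training errors (illustrative quantile).
threshold = np.quantile(reconstruction_error(normal), 0.99)

def is_anomaly(x):
    return reconstruction_error(x) > threshold
```

A sample drawn far from the normal distribution reconstructs poorly and scores above the threshold, while held-out normal samples mostly fall below it; concept drift then corresponds to the normal distribution itself moving, which is why ONLAD keeps updating the model on-device instead of fixing it after offline training.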
Most edge AI focuses on prediction tasks on resource-limited edge devices, while training is done on server machines. However, retraining or customizing a model is required at the edge as the model becomes outdated due to environmental changes over time. To follow such concept drift, a neural-network-based on-device learning approach has recently been proposed, in which edge devices train on incoming data at runtime to update their model. In this case, since training is done at distributed edge devices, the issue is that only a limited amount of training data is available to each edge device. One way to address this issue is cooperative or federated learning, where edge devices exchange their trained results and update their models using those collected from the other devices. In this paper, as an on-device learning algorithm, we focus on OS-ELM (Online Sequential Extreme Learning Machine), which sequentially trains a model on recent samples, and combine it with an autoencoder for anomaly detection. We extend it to on-device federated learning so that edge devices can exchange their trained results and update their models using those collected from the other edge devices. This cooperative model update is one-shot, yet it can be applied repeatedly to keep the models synchronized. Our approach is evaluated on anomaly detection tasks generated from a car driving dataset, a human activity dataset, and the MNIST dataset. The results demonstrate that the proposed on-device federated learning can produce a merged model, integrating trained results from multiple edge devices, that is as accurate as traditional backpropagation-based neural networks and a traditional federated learning approach, at lower computation or communication cost.
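OS-ELM is compact enough to sketch: the hidden layer uses random, fixed weights, and only the output weights are updated, by a recursive least-squares step, as each new chunk of data arrives, with no replay of earlier data. The single-device sketch below follows the standard OS-ELM recursion in NumPy; the layer sizes, ridge term, and variable names are our own assumptions, and it omits the paper's federated extension, which additionally merges the trained results across devices in one shot.

```python
import numpy as np

rng = np.random.default_rng(42)

# Autoencoder setting: the target is the input itself, so n_out == n_in.
n_in, n_hidden = 4, 16

# ELM hidden layer: random weights that are fixed and never trained.
W = rng.normal(size=(n_in, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)

# Initialization phase on a first batch (targets = inputs for an autoencoder).
# The small ridge term is our own addition for numerical stability.
X0 = rng.normal(size=(32, n_in))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))
beta = P @ H0.T @ X0

def osELM_update(P, beta, X):
    """Sequential (recursive least-squares) update on one new chunk."""
    H, T = hidden(X), X
    S = np.linalg.inv(np.eye(len(X)) + H @ P @ H.T)
    P = P - P @ H.T @ S @ H @ P                # update inverse covariance
    beta = beta + P @ H.T @ (T - H @ beta)     # update output weights
    return P, beta

chunks = [X0]
for _ in range(10):                            # stream of chunks at runtime
    X = rng.normal(size=(8, n_in))
    chunks.append(X)
    P, beta = osELM_update(P, beta, X)

def anomaly_score(x):
    """Reconstruction error of the sequentially trained autoencoder."""
    return np.mean((x - hidden(x) @ beta) ** 2, axis=-1)
```

A useful property of this recursion is that the sequentially updated weights coincide with the batch (ridge) least-squares solution over all chunks seen so far, which is what makes the update exact rather than an approximation, and what makes one-shot merging of per-device results tractable in the federated extension.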