Fault diagnosis and prognosis (FDP) aims to recognize and locate faults from captured sensory data and to predict failures in advance, which helps operators take appropriate maintenance actions and avoid serious consequences in industrial systems. In recent years, deep learning methods have been widely introduced into FDP owing to their powerful feature representation ability, and their rapid development is creating new opportunities for advancing FDP. To facilitate related research, this paper summarizes recent advances in deep learning techniques for industrial FDP. We first present the relevant concepts and formulations of FDP. We then review seven commonly used deep learning architectures, with particular attention to the emerging generative adversarial network, Transformer, and graph neural network. Finally, we examine the challenges facing current deep learning-based methods from four aspects: imbalanced data, compound fault types, multimodal data fusion, and edge-device implementation, and we suggest possible solutions for each. This paper aims to provide the community with a comprehensive guideline for further research on intelligent industrial FDP.
Mobile edge computing is a new computing paradigm that extends cloud computing capabilities to the network edge, supporting computation-intensive applications such as face recognition, natural language processing, and augmented reality. Computation offloading is a key technology of mobile edge computing that improves mobile devices' performance and users' experience by offloading local tasks to edge servers. In this paper, we study the problem of computation offloading in multiuser, multiserver, multichannel scenarios and propose a computation offloading framework that accounts for users' quality of service (QoS), server resources, and channel interference. The framework consists of three stages. (1) In the offloading decision stage, the decision is made according to how beneficial offloading is, measured by comparing the total cost of computing locally on the mobile device with the cost of offloading to an edge server. (2) In the edge server selection stage, candidate servers are comprehensively evaluated and selected through a multiobjective decision based on the covariance-based Analytic Hierarchy Process (Cov-AHP). (3) In the channel selection stage, a distributed multiuser, multichannel computation offloading strategy based on a potential game is proposed, taking into account the influence of channel interference on each user's overall overhead. The corresponding multiuser, multichannel task scheduling algorithm maximizes the overall benefit by finding a Nash equilibrium of the potential game. Extensive experimental results show that the proposed framework greatly increases the number of users who benefit from computation offloading and effectively reduces energy consumption and delay.
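To make the stage-1 offloading decision concrete, here is a minimal sketch of a beneficial-offloading test under a commonly assumed weighted delay-plus-energy cost model. The abstract does not give the paper's exact cost formulation, so the formulas, parameter names, and numbers below are illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class Task:
    input_bits: float   # data to upload to the edge server (bits)
    cycles: float       # CPU cycles required to process the task

@dataclass
class Device:
    f_local: float      # local CPU frequency (cycles/s)
    kappa: float        # switched-capacitance energy coefficient (assumed model)
    p_tx: float         # transmit power while uploading (W)

def local_cost(task: Task, dev: Device, w_t: float, w_e: float) -> float:
    """Weighted delay + energy cost of executing the task on the device."""
    delay = task.cycles / dev.f_local
    energy = dev.kappa * dev.f_local ** 2 * task.cycles  # E = kappa * f^2 * C
    return w_t * delay + w_e * energy

def edge_cost(task: Task, dev: Device, uplink_rate: float,
              f_edge: float, w_t: float, w_e: float) -> float:
    """Weighted cost of offloading: upload delay plus remote execution delay;
    the device itself only spends energy while transmitting."""
    t_up = task.input_bits / uplink_rate
    delay = t_up + task.cycles / f_edge
    energy = dev.p_tx * t_up
    return w_t * delay + w_e * energy

def is_beneficial(task: Task, dev: Device, uplink_rate: float,
                  f_edge: float, w_t: float = 0.5, w_e: float = 0.5) -> bool:
    """Stage-1 rule: offload only when the edge-side total cost is lower."""
    return (edge_cost(task, dev, uplink_rate, f_edge, w_t, w_e)
            < local_cost(task, dev, w_t, w_e))

if __name__ == "__main__":
    task = Task(input_bits=2e6, cycles=1e9)           # 2 Mbit input, 1 Gcycle job
    dev = Device(f_local=1e9, kappa=1e-27, p_tx=0.5)  # illustrative values
    print(is_beneficial(task, dev, uplink_rate=5e6, f_edge=1e10))  # -> True
```

In this toy setting the fast edge CPU amortizes the 0.4 s upload, so the user is counted as a beneficial offloading user; with a slower uplink the same test would keep the task local.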
With the development of mobile edge computing (MEC), more and more intelligent services and applications based on deep neural networks are deployed on mobile devices to meet users' diverse and personalized needs. Unfortunately, deploying deep learning models and running inference on resource-constrained devices is challenging. The traditional cloud-based approach runs the model on a cloud server, but transmitting large amounts of input data over the WAN incurs high service latency, which is unacceptable for most latency-sensitive, computation-intensive applications. In this paper, we propose Cogent, an execution framework that accelerates deep neural network inference through device-edge synergy. Cogent operates in two stages: an automatic pruning and partition stage and a containerized deployment stage. Cogent uses reinforcement learning (RL) to automatically predict pruning and partition strategies from feedback on the hardware configuration and system conditions, so that the pruned and partitioned model adapts to the system environment and the user's hardware. The model is then deployed in containers on the device and the edge server to accelerate inference. Experiments show that this learning-based, hardware-aware automatic pruning and partition scheme significantly reduces service latency and accelerates the overall inference process while maintaining accuracy, achieving speedups of up to 8.89× with an accuracy loss of no more than 7%.
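To illustrate the partition half of the idea, the sketch below enumerates the split points of a sequential DNN and picks the one that minimizes end-to-end latency under a simple additive latency model. Cogent learns this choice with RL rather than exhaustive search, so this is only an assumed baseline for exposition; all layer timings, tensor sizes, and function names are hypothetical.

```python
def best_partition(device_ms, edge_ms, act_bytes, uplink_bps):
    """Return (split_index, latency_ms). split_index = k means layers
    [0, k) run on the device and layers [k, n) run on the edge server;
    k = 0 is full offload and k = n is fully local execution."""
    n = len(device_ms)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):
        on_device = sum(device_ms[:k])                   # device-side compute
        upload = act_bytes[k] * 8 / uplink_bps * 1000.0  # bytes -> ms on the link
        on_edge = sum(edge_ms[k:])                       # edge-side compute
        total = on_device + upload + on_edge
        if total < best_latency:
            best_k, best_latency = k, total
    return best_k, best_latency

if __name__ == "__main__":
    device_ms = [12.0, 30.0, 45.0, 8.0]  # hypothetical per-layer device latency
    edge_ms   = [1.5, 4.0, 6.0, 1.0]     # hypothetical per-layer edge latency
    # act_bytes[k]: bytes that must cross the network when splitting at k
    # (act_bytes[0] is the raw input; act_bytes[n] = 0 means nothing is sent)
    act_bytes = [602112, 401408, 100352, 25088, 0]
    k, latency = best_partition(device_ms, edge_ms, act_bytes, uplink_bps=50e6)
    print(f"split at layer {k}: {latency:.1f} ms end-to-end")  # split at layer 2
```

Because activation tensors typically shrink deeper into the network while per-layer cost varies, the best split usually sits at a narrow intermediate layer; pruning further shrinks both the compute and the transferred tensor, which is why Cogent optimizes the two jointly.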