Federated learning requires all connection weights to be shared between the server and clients during training, which increases the risk of data leakage. Meanwhile, traditional federated learning methods perform poorly on non-independent and identically distributed (non-IID) data. To address these issues, a multi-level federated network based on interpretable indicators is proposed in this manuscript. Firstly, an interpretable adaptive sparse deep network is constructed based on the interpretability principle. Secondly, a relevance map of the network is constructed from the interpretable indicators; based on this map, the contribution of each connection weight is used to build the multi-level federated network. Finally, the effectiveness of the proposed algorithm is demonstrated through experimental validation.
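To make the weight-selection idea concrete, the following is a minimal sketch (not the authors' implementation) of how a client might share only its most relevant connection weights with the server, so that low-contribution weights stay local. The names `relevance_scores` and `share_fraction` are illustrative assumptions; the paper's actual relevance map is derived from its interpretable indicators.

```python
# Hypothetical sketch: partial weight sharing driven by per-connection relevance.
import numpy as np

def select_shared_weights(weights: np.ndarray,
                          relevance_scores: np.ndarray,
                          share_fraction: float = 0.5) -> np.ndarray:
    """Return a boolean mask marking the top `share_fraction` of weights by relevance."""
    k = max(1, int(share_fraction * weights.size))
    top_idx = np.argpartition(relevance_scores.ravel(), -k)[-k:]
    mask = np.zeros(weights.size, dtype=bool)
    mask[top_idx] = True
    return mask.reshape(weights.shape)

def client_update_payload(weights, relevance_scores, share_fraction=0.5):
    """Package only the selected weights for transmission to the server."""
    mask = select_shared_weights(weights, relevance_scores, share_fraction)
    return {"mask": mask, "values": weights[mask]}

# Example: a client shares half of its weights, chosen by relevance score.
w = np.random.randn(4, 8)
r = np.abs(np.random.randn(4, 8))      # stand-in for interpretability-based scores
payload = client_update_payload(w, r, 0.5)
print(payload["values"].shape)         # (16,) -> half of the 32 weights are shared
```

In this sketch the server would receive only the masked values, which is one plausible way a multi-level sharing scheme can reduce the amount of weight information exposed during training.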
The drive rolling bearing is an important part of a ship's system; detecting faults in the drive rolling bearing is an important component of ship fault diagnosis, and machine learning methods are now widely used in the fault diagnosis of rolling bearings. However, training methods based on small batches have the disadvantage that the samples which best represent the gradient descent direction can be disturbed by other samples pointing in the opposite direction or by anomalies. To address this problem, a sparse denoising gradient descent (SDGD) optimization algorithm, based on the impact values of network nodes, is proposed to improve the batch-gradient update. First, the network is sparsified using a node-weighting method based on the mean impact value. Second, the batch gradients are clustered via a distribution-density-based clustering method. Finally, the network parameters are updated using the clustered gradient values. The experimental results show the efficiency and feasibility of the proposed method: the SDGD model achieves up to a 2.35% improvement in diagnostic accuracy compared with the traditional network diagnosis model, and its training convergence speed improves by 2.16% to 17.68%. The SDGD model can effectively avoid falling into local optima while training a network.
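The clustered-gradient update can be illustrated with a short sketch. This is a minimal, hypothetical rendering (not the authors' code) that groups per-sample gradients with a density-based clustering step (DBSCAN stands in for the paper's distribution-density-based method) and updates the parameters with the mean gradient of the largest cluster, so that opposing or anomalous samples do not dominate the step; `eps` and `min_samples` are assumed values.

```python
# Hypothetical sketch: density-clustered batch-gradient update.
import numpy as np
from sklearn.cluster import DBSCAN

def clustered_gradient_step(params: np.ndarray,
                            per_sample_grads: np.ndarray,
                            lr: float = 0.01,
                            eps: float = 0.5,
                            min_samples: int = 3) -> np.ndarray:
    """Update `params` using the mean gradient of the dominant density cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(per_sample_grads)
    valid = labels[labels != -1]                 # discard points flagged as noise
    if valid.size == 0:                          # fall back to the plain batch mean
        step = per_sample_grads.mean(axis=0)
    else:
        dominant = np.bincount(valid).argmax()   # label of the largest cluster
        step = per_sample_grads[labels == dominant].mean(axis=0)
    return params - lr * step

# Example with a small synthetic batch of per-sample gradients.
rng = np.random.default_rng(0)
grads = np.vstack([rng.normal(1.0, 0.1, size=(20, 5)),    # consistent directions
                   rng.normal(-5.0, 0.1, size=(2, 5))])   # anomalous samples
theta = np.zeros(5)
theta = clustered_gradient_step(theta, grads, lr=0.1)
print(theta)   # the step follows the dominant cluster, not the anomalies
```

The design intent mirrors the abstract: by averaging only within the densest group of gradients, the update direction is kept close to the consensus of the batch rather than being pulled off course by outliers.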