Kernel principal component analysis (KPCA) has shown excellent performance in monitoring nonlinear industrial processes. However, building, updating, and applying a KPCA model online are generally time-consuming when massive data are collected under the normal operating condition (NOC). The main reason is that the eigen-decomposition of the high-dimensional kernel matrix constructed from massive NOC samples is computationally expensive. Many studies have addressed this problem by reducing the number of NOC samples, but a KPCA model built from a reduced sample set cannot guarantee good monitoring performance. The performance of a KPCA model depends on how well the eigen-decomposition of the reduced kernel matrix approximates that of the original kernel matrix. To improve the efficiency of KPCA-based process monitoring, this paper proposes randomized KPCA for monitoring nonlinear industrial processes with massive data. The proposed method uses random sampling to compress the kernel matrix into a low-dimensional subspace that retains most of the information useful for process monitoring. An eigen-decomposition is then performed on the reduced kernel matrix to obtain approximate eigenvalues and eigenvectors. Built on these approximate eigen-decomposition results, the proposed randomized KPCA enhances the monitoring of nonlinear industrial processes, because the commonly used monitoring statistics are computed from the eigenvalues and eigenvectors of the kernel matrix. Finally, a numerical simulation and the benchmark Tennessee Eastman (TE) chemical process are used to demonstrate the effectiveness of the proposed method.
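The following is a minimal sketch of the idea of compressing a kernel matrix before eigen-decomposition. It uses an RBF kernel, a Gaussian sketching matrix as the random compression step (a standard randomized range finder; the paper's exact sampling scheme may differ), and illustrative values for the sketch size k, oversampling, and kernel width gamma. It is not the authors' implementation, only an approximation of the general technique.

```python
import numpy as np

def rbf_kernel(X, gamma=0.1):
    # Pairwise squared Euclidean distances -> RBF (Gaussian) kernel matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def randomized_kpca(X, k=20, oversample=10, gamma=0.1, seed=None):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Center the kernel matrix in feature space.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    # Randomized compression: project Kc onto a (k + oversample)-dim subspace.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(Kc @ Omega)
    # Eigen-decompose the small projected matrix instead of the full n x n one.
    B = Q.T @ Kc @ Q
    evals, V = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1][:k]
    evals = evals[order]
    evecs = Q @ V[:, order]          # approximate eigenvectors of Kc
    return Kc, evals, evecs

# Usage: a T^2-type monitoring statistic built from the approximate eigenpairs.
X = np.random.default_rng(0).standard_normal((500, 10))   # stand-in NOC data
Kc, lam, U = randomized_kpca(X, k=15, seed=1)
lam_safe = np.maximum(lam, 1e-12)
scores = (Kc @ U) / np.sqrt(lam_safe)        # kernel principal component scores
T2 = np.sum(scores**2 / lam_safe, axis=1)    # Hotelling-type T^2 per sample
```

The expensive step, the eigen-decomposition, is applied only to the small (k + oversample) x (k + oversample) matrix B, while the approximate eigenvectors are recovered in the original space through Q, which is the reason the approach scales to massive NOC data sets.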
A problem arises in multi-classifier systems: each classifier is normally treated as equally important when evidence is combined, which contradicts the fact that different classifiers perform differently owing to classifier diversity. How to determine the weights of the individual classifiers so as to obtain more accurate results is therefore a question that needs to be solved. This paper presents an optimal weight learning method. First, the training samples are fed into a multi-classifier system based on Dempster-Shafer theory to obtain an output vector for each sample. The error is then computed as the distance between the output vector and the class vector of the corresponding training sample, and the objective function is defined as the mean-square error over all training samples. The optimal weight vector is obtained by minimizing this objective function. Finally, new samples are classified using the optimal weight vector. The effectiveness of the method is demonstrated on the UCI standard data set and an electric actuator fault-diagnosis experiment.
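Below is a minimal sketch of the weight-learning idea, under simplifying assumptions not stated in the abstract: each classifier outputs a basic probability assignment over singleton classes plus the frame Theta, the weights act through standard Shafer discounting before Dempster's rule of combination, and scipy's L-BFGS-B optimizer is used as a stand-in for whatever minimization scheme the paper employs. All data and helper names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def discount(m, w):
    """Shafer discounting of a BPA; last entry of m is the mass on Theta."""
    md = np.empty_like(m)
    md[:-1] = w * m[:-1]
    md[-1] = 1.0 - w + w * m[-1]
    return md

def combine(m1, m2):
    """Dempster's rule for BPAs whose focal elements are singletons plus Theta."""
    k = len(m1) - 1
    out = np.zeros_like(m1)
    for i in range(k):
        out[i] = m1[i] * m2[i] + m1[i] * m2[-1] + m1[-1] * m2[i]
    out[-1] = m1[-1] * m2[-1]
    total = out.sum()               # equals 1 minus the conflict mass
    return out / total if total > 0 else out

def combined_output(bpas, weights):
    """Fuse one sample's BPAs from all classifiers; bpas: (n_clf, n_cls + 1)."""
    fused = discount(bpas[0], weights[0])
    for j in range(1, len(bpas)):
        fused = combine(fused, discount(bpas[j], weights[j]))
    return fused[:-1]               # belief assigned to each singleton class

def objective(weights, all_bpas, targets):
    """Mean-square error between fused outputs and one-hot class vectors."""
    err = sum(np.sum((combined_output(b, weights) - t) ** 2)
              for b, t in zip(all_bpas, targets))
    return err / len(all_bpas)

# Toy usage with 3 classifiers, 2 classes, and hypothetical BPAs.
rng = np.random.default_rng(0)
n_samples, n_clf, n_cls = 50, 3, 2
raw = rng.random((n_samples, n_clf, n_cls))
theta = 0.1 * np.ones((n_samples, n_clf, 1))
bpas = np.concatenate([0.9 * raw / raw.sum(-1, keepdims=True), theta], axis=-1)
targets = np.eye(n_cls)[rng.integers(0, n_cls, n_samples)]
res = minimize(objective, x0=np.full(n_clf, 0.5), args=(bpas, targets),
               method="L-BFGS-B", bounds=[(0.0, 1.0)] * n_clf)
print("learned classifier weights:", res.x)
```

New samples would then be classified by fusing their BPAs with the learned weight vector res.x and taking the class with the largest combined belief.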