Convolutional neural networks (CNNs) have received widespread attention for their powerful modeling capabilities and have been successfully applied in natural language processing, image recognition, and other fields. However, traditional CNNs can only handle Euclidean spatial data, whereas many real-world scenarios, such as transportation networks, social networks, and citation networks, are naturally represented as graph data. The design of graph convolution operators and graph pooling operations lies at the heart of extending CNNs to graph data analysis and processing. With the advancement of the Internet and related technologies, the graph convolutional network (GCN), an innovative technique in artificial intelligence (AI), has attracted growing attention. GCNs have been widely used in fields such as image processing, intelligent recommender systems, and knowledge graphs owing to their strength in processing non-Euclidean spatial data. Meanwhile, communication networks have also embraced AI in recent years: AI serves as the brain of the future network and enables its comprehensive intelligence. Many complex communication network problems can be abstracted as graph-based optimization problems and solved by GCNs, thereby overcoming the limitations of traditional methods. This survey briefly defines graph-based machine learning, introduces different types of graph networks, summarizes the applications of GCNs across research fields, analyzes the current state of research, and outlines future research directions.
In recent years, ozone (O3) has gradually become the primary pollutant plaguing urban air quality, so accurate and efficient ozone prediction is of great significance to the prevention and control of ozone pollution. Air quality monitoring networks provide multisource pollutant concentration data for ozone prediction, but prediction based on such data still faces challenges from the complex concentration series recorded at each station. To address the low prediction accuracy and low computational efficiency of traditional atmospheric ozone concentration prediction, an ozone concentration prediction method using dual series decomposition is proposed, combining variational mode decomposition (VMD), ensemble empirical mode decomposition (EEMD), and long short-term memory (LSTM) networks. First, the historical data series from Nanjing air quality monitoring stations is decomposed by VMD; the EEMD algorithm is then applied to the VMD residual to obtain several characteristic intrinsic mode function (IMF) components. Each IMF component is modeled by an LSTM to produce a component-wise prediction, and the final result is obtained by linear superposition of these predictions. The proposed method achieved the best results, with R2 = 99%, MSE = 5.38, MAE = 4.54, and MAPE = 3.12. Because the LSTM has strong adaptive learning ability and an effective memory mechanism, it can capture long-term dependencies in long data series, making the predictions more accurate. The results show that the proposed method outperforms the baseline models on these statistical metrics, so the proposed hybrid method can serve as a reliable model for ozone forecasting.
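The decompose-predict-superpose pattern described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a moving-average split stands in for the VMD/EEMD decomposition, and a least-squares autoregressive model stands in for the per-component LSTM; the window, lag, and the toy daily-cycle signal are all assumptions for demonstration.

```python
import numpy as np

def decompose(series, window=25):
    """Stand-in for VMD + EEMD: split the series into a smooth trend
    (edge-padded moving average) and a residual component. The paper
    instead applies VMD, then EEMD on the VMD residual, to get IMFs."""
    half = window // 2
    padded = np.pad(series, half, mode="edge")
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")
    return [trend, series - trend]

def fit_ar(component, lags=8):
    """Stand-in for the per-component LSTM: a least-squares AR model
    mapping the previous `lags` values to the next value."""
    X = np.array([component[i:i + lags] for i in range(len(component) - lags)])
    y = component[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, window=25, lags=8):
    """Decompose, forecast each component one step ahead, then
    linearly superpose the component forecasts."""
    parts = decompose(series, window)
    return sum(fit_ar(p, lags) @ p[-lags:] for p in parts)

# Toy usage: a noisy signal with a 24-hour cycle (hypothetical data).
rng = np.random.default_rng(0)
t = np.arange(500)
ozone = 60 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
pred = predict_next(ozone)
```

In the actual method, each IMF carries a narrower frequency band than these two crude components, which is what makes the per-component models easier to train.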
The technology of visual servoing, with the digital twin as its driving force, holds great promise for enhancing the flexibility and efficiency of smart manufacturing assembly and dispensing applications. Effective deployment of visual servoing depends on robust and accurate estimation of the vision-motion correlation. Network-based methods are frequently employed in visual servoing to approximate the mapping between 2D image feature errors and 3D velocities, offering promising avenues for improving the accuracy and reliability of visual servoing systems, and these developments have the potential to fully leverage digital twin technology in smart manufacturing. However, obtaining sufficient training data for these methods is challenging, so improving model generalization to reduce data requirements is imperative. To address this issue, we propose a learning-based approach for estimating the Jacobian matrices of visual servoing that organically combines an extreme learning machine (ELM) with a differential evolution (DE) algorithm. In the first stage, the pseudoinverse of the image Jacobian matrix is approximated by the ELM, which avoids the problems of traditional visual servoing and is robust to external disturbances such as image noise and camera calibration errors. In the second stage, differential evolution is used to select the input weights and hidden layer biases and to determine the ELM's output weights. Experimental results on a digital twin operating platform for a 4-DOF robot with an eye-in-hand configuration demonstrate better performance than classical visual servoing and traditional ELM-based visual servoing in various cases.
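The two-stage scheme above can be sketched as follows: an ELM whose output weights are solved analytically by a pseudoinverse, wrapped in a small differential evolution loop that searches over the input weights and hidden biases. This is a hedged numpy sketch under simplifying assumptions, not the paper's implementation: the robot-specific Jacobian regression target is replaced by a toy nonlinear function, and the population size, generations, F, and CR values are illustrative choices.

```python
import numpy as np

def elm_output_weights(W, b, X, Y):
    """Core ELM step: compute hidden activations and solve the output
    weights analytically with the Moore-Penrose pseudoinverse."""
    H = np.tanh(X @ W + b)
    return np.linalg.pinv(H) @ Y

def elm_predict(W, b, beta, X):
    return np.tanh(X @ W + b) @ beta

def de_train_elm(X, Y, n_hidden=20, pop=15, gens=40, F=0.6, CR=0.9, seed=0):
    """Differential evolution over the ELM's input weights and biases.
    Each candidate encodes (W, b); its fitness is the training MSE
    after the output weights are solved analytically."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_in * n_hidden + n_hidden

    def unpack(v):
        return v[: n_in * n_hidden].reshape(n_in, n_hidden), v[n_in * n_hidden:]

    def fitness(v):
        W, b = unpack(v)
        beta = elm_output_weights(W, b, X, Y)
        return np.mean((elm_predict(W, b, beta, X) - Y) ** 2)

    P = rng.uniform(-1, 1, (pop, dim))
    scores = np.array([fitness(v) for v in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b2, c = P[idx]
            trial = np.where(rng.random(dim) < CR, a + F * (b2 - c), P[i])
            s = fitness(trial)
            if s < scores[i]:          # greedy selection
                P[i], scores[i] = trial, s
    W, b = unpack(P[np.argmin(scores)])
    return W, b, elm_output_weights(W, b, X, Y)

# Toy usage: fit a nonlinear 2D mapping standing in for the Jacobian
# pseudoinverse regression (hypothetical data, not robot measurements).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
Y = np.sin(np.pi * X[:, 0]) * X[:, 1]
W, b, beta = de_train_elm(X, Y)
mse = np.mean((elm_predict(W, b, beta, X) - Y) ** 2)
```

The design point worth noting is that only the input-side parameters are evolved; because the output weights have a closed-form least-squares solution, each DE fitness evaluation stays cheap.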