Recent years have witnessed a growing interest in using machine learning to predict and identify phase transitions in various systems. Here we adopt convolutional neural networks (CNNs) to study the phase transitions of the Vicsek model, addressing a problem that traditional order parameters cannot resolve. In large-scale simulations, four phases appear, and we confirm that all transitions between neighboring phases are first-order. Using CNNs, we classify the phases with high accuracy and identify the phase-transition points, which traditional approaches based on various order parameters fail to locate. These results indicate the great potential of machine-learning approaches for understanding the complexities of collective behavior, and of related complex systems in general.
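For context, the "traditional order parameter" referred to here is typically the polar order parameter of the Vicsek model, the magnitude of the mean heading vector of all particles. A minimal numpy sketch (with synthetic heading angles as an illustrative stand-in for simulation data) shows how it distinguishes a disordered from an ordered state:

```python
import numpy as np

def polar_order(angles):
    """Vicsek polar order parameter: |mean heading unit vector|, in [0, 1]."""
    return float(np.hypot(np.mean(np.cos(angles)), np.mean(np.sin(angles))))

rng = np.random.default_rng(1)
disordered = rng.uniform(0, 2 * np.pi, 10_000)  # random headings -> value near 0
ordered = rng.normal(0.3, 0.05, 10_000)         # nearly aligned -> value near 1

print(polar_order(disordered))
print(polar_order(ordered))
```

The abstract's point is that a single scalar like this cannot separate all four phases, which motivates feeding the full particle configurations to a CNN instead.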
By training a convolutional neural network (CNN) model, we successfully recognize different phases of the El Niño-Southern Oscillation (ENSO). Our model achieves high recognition performance, with accuracy rates of 89.4\% on the training dataset and 86.4\% on the validation dataset. Through statistical analysis of the weight-parameter distributions and activation outputs in the CNN, we find that most of the convolution kernels and hidden-layer neurons remain inactive, while only two convolution kernels and two hidden-layer neurons play active roles. By examining the weight connections between these active convolution kernels and hidden neurons, we can automatically differentiate various types of El Niño and La Niña events, thereby identifying the specific function of each part of the network. We anticipate that this work will be helpful for future studies on climate prediction and for a deeper understanding of artificial neural networks.
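The activation analysis described above can be illustrated with a small numpy sketch. This is not the authors' code; it assumes toy "trained" kernels (two with sizeable weights, two near zero) and random input maps, and flags a kernel as inactive when its mean ReLU output over a batch falls below a small fraction of the largest kernel's:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive valid-mode 2D cross-correlation."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "trained" kernels: two with sizeable weights, two effectively dead.
kernels = [rng.normal(0, 1.0, (3, 3)), rng.normal(0, 1.0, (3, 3)),
           rng.normal(0, 1e-4, (3, 3)), rng.normal(0, 1e-4, (3, 3))]

images = rng.normal(0, 1.0, (10, 16, 16))  # stand-in for input fields

# Mean absolute ReLU activation per kernel over the batch.
mean_act = [float(np.mean([np.maximum(conv2d_valid(img, k), 0.0)
                           for img in images])) for k in kernels]

threshold = 0.01 * max(mean_act)
active = [i for i, a in enumerate(mean_act) if a > threshold]
print("active kernels:", active)
```

The same per-unit activation statistic, applied to hidden-layer neurons, is what lets one prune the network down to the few components that actually carry the ENSO signal.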
An increasing number of people convey their opinions in multiple modalities. For the purpose of opinion mining, sentiment classification based on multimodal data has therefore become a major focus. In this work, we propose a novel Multimodal Interactive and Fusion Graph Convolutional Network that handles both texts and images for document-level multimodal sentiment analysis (DLMSA). An image caption is introduced as an auxiliary signal and aligned with the image to enhance semantic delivery. A graph is then constructed with the sentences and the generated captions and images as nodes. Through graph learning, long-distance dependencies can be captured while visual noise is filtered out. Specifically, a cross-modal graph convolutional network is built for multimodal information fusion. Extensive experiments on a multimodal dataset from Yelp show that our model achieves strong performance on DLMSA tasks.
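The core fusion operation in such a model is a graph-convolution step over a heterogeneous graph of text and image nodes. A minimal numpy sketch of one layer, using the standard symmetrically normalized propagation rule H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W) on a hypothetical toy graph (the adjacency pattern and feature sizes here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution step: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # degree normalization
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weights, 0.0)

# Toy document graph: 3 sentence nodes, each linked to 1 image node.
adj = np.array([[0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [1, 1, 1, 0]], dtype=float)

rng = np.random.default_rng(2)
feats = rng.normal(size=(4, 8))    # node features (e.g. text/image embeddings)
weights = rng.normal(size=(8, 4))  # learnable projection

out = gcn_layer(adj, feats, weights)
print(out.shape)
```

Stacking such layers is what lets sentence nodes aggregate information from images several hops away, which is how the long-distance dependencies mentioned in the abstract are captured.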