Reasoning and inference are central to both human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have been shown to be very effective. In this paper, we present a new state-of-the-art result, achieving an accuracy of 88.6% on the Stanford Natural Language Inference Dataset. Unlike previous top models that use very complicated network architectures, we first demonstrate that carefully designed sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. In particular, incorporating syntactic parsing information contributes to our best result: it further improves the performance even when added to the already very strong model.
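The local inference modeling step mentioned above can be illustrated with a small sketch. The function below is a hedged, self-contained NumPy illustration of bidirectional soft attention between a premise and a hypothesis (the core alignment idea in such sequential inference models); the function name `soft_align` and the toy dimensions are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_align(a, b):
    """Bidirectional soft attention between premise a (La, d)
    and hypothesis b (Lb, d); returns aligned representations."""
    e = a @ b.T                        # (La, Lb) token-pair similarity scores
    a_tilde = softmax(e, axis=1) @ b   # each premise token as a weighted mix of b
    b_tilde = softmax(e, axis=0).T @ a # each hypothesis token as a weighted mix of a
    return a_tilde, b_tilde
```

The aligned pairs (`a`, `a_tilde`) and (`b`, `b_tilde`) would then be compared and composed by a downstream network to predict entailment, contradiction, or neutrality.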
Stock price forecasting is a research topic in the fields of investment and national policy, and it has long been a challenging problem owing to the noise, nonlinearity, high frequency, and chaotic nature of stock data. These characteristics impede most forecasting models from extracting valuable information from stock data. Herein, a novel hybrid deep learning model integrating an attention mechanism, a multi-layer perceptron, and a bidirectional long short-term memory neural network is proposed. First, the raw data, comprising four types of datasets (historical stock prices, technical indicators of stock closing prices, natural resource prices, and historical data of the Google index), are transformed into a knowledge base with reduced dimensions using principal component analysis. Subsequently, the multi-layer perceptron is used for fast transformation of the feature space and rapid gradient descent, the bidirectional long short-term memory network for extracting temporal features of the stock time series, and the attention mechanism for making the network focus on crucial temporal information by assigning it higher weights. Finally, a comprehensive model evaluation method is used to compare the proposed model with seven related baseline models. In extensive experiments, the proposed model demonstrated strong forecasting performance.
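The pipeline described above (MLP feature transform, bidirectional LSTM over time, attention pooling, regression head) can be sketched as a minimal PyTorch module. This is an illustrative sketch under assumptions, not the paper's actual architecture: the class name `AttnBiLSTM`, the hidden size, and the single-output head are invented for the example, and the PCA dimensionality reduction is assumed to have been applied to the input features beforehand.

```python
import torch
import torch.nn as nn

class AttnBiLSTM(nn.Module):
    """Hypothetical sketch: MLP -> BiLSTM -> attention pooling -> forecast."""

    def __init__(self, n_features, hidden=32):
        super().__init__()
        # MLP: fast transformation of the (PCA-reduced) feature space.
        self.mlp = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        # BiLSTM: extracts temporal features in both directions.
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True,
                              bidirectional=True)
        # Attention: scores each time step so crucial steps get higher weight.
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                    # x: (batch, time, n_features)
        h = self.mlp(x)
        seq, _ = self.bilstm(h)              # (batch, time, 2*hidden)
        alpha = torch.softmax(self.attn(seq), dim=1)  # weights over time
        ctx = (alpha * seq).sum(dim=1)       # attention-weighted context
        return self.out(ctx)                 # scalar forecast per sample
```

A usage sketch: feeding a batch of 4 windows of 20 trading days with 10 PCA components each, `AttnBiLSTM(n_features=10)(torch.randn(4, 20, 10))`, yields one forecast per window.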