2022
DOI: 10.1155/2022/7739087
Incorporating Transformers and Attention Networks for Stock Movement Prediction

Abstract: Predicting stock movements is a valuable research field that can help investors earn more profits. As with other time-series data, the stock market is time-dependent, and the value of historical information may decrease over time. Accurate prediction can be achieved by mining valuable information from text on social platforms and further integrating it with actual stock market conditions. However, many methods still cannot effectively mine hidden information or integrate text and stock prices, and they ignore the …

Cited by 6 publications (3 citation statements)
References 19 publications
“…They compared this approach with a conventional deep learning model and found that the Transformer model had superior predictive accuracy in all experiments and a better NAV profile. Li et al. (2022) proposed a stock movement prediction model based on a Transformer and an attention mechanism, validated on social media text (including Twitter) and stock price data; it deeply extracts features from the text and stock prices and uses attention to capture key information. The experimental results show that the method outperforms other baseline models on several metrics and has practical application value.…”
Section: Related Work
mentioning
confidence: 99%
“…The comparison models include CNN, BiLSTM, CNN-BiLSTM, GRU, GAN, TEA, and GAN-HPA. In the GAN model, BiLSTM is selected for the generator and CNN for the discriminator. The experiments use MAE, MSE, and RMSE as the evaluation indexes of the models; the specific experimental values are shown in Table 7.

Model          MAE     MSE     RMSE
TEA [14]       0.0231  0.0012  0.0359
GAN-HPA [20]   0.0224  0.0012  0.0348
TK-GAN         0.0200  0.0009  0.0306…”
Section: Comparison With Existing Algorithms
mentioning
confidence: 99%
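The evaluation indexes above (MAE, MSE, RMSE) follow their standard definitions; a minimal sketch of how they would be computed, with hypothetical predicted and actual price values:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, and RMSE between actual and predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))        # mean absolute error
    mse = np.mean(err ** 2)           # mean squared error
    rmse = np.sqrt(mse)               # root mean squared error
    return mae, mse, rmse

# Hypothetical normalized closing prices (illustration only)
mae, mse, rmse = regression_metrics([0.50, 0.52, 0.48], [0.51, 0.50, 0.49])
```

Lower values indicate better fit on all three indexes, which is how the table above ranks TK-GAN ahead of TEA and GAN-HPA.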
“…Zhang [13] combined empirical mode decomposition with a GRU model and an attention mechanism to further improve the experimental results. Li et al. [14] proposed a Transformer-based attention framework (TEA), a network framework for deeply extracting features of financial data and further integrating them. It consists of a feature extractor and a cascade processor, and uses the text and stock price information of the five calendar days preceding the target trading day as training data, revealing the strong correlation between the text and the financial data.…”
mentioning
confidence: 99%
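The attention-based integration of text and price features described above can be sketched with standard scaled dot-product attention (the Transformer building block). The shapes, feature dimension, and the choice of text embedding as query over five days of price features are illustrative assumptions, not the exact TEA architecture:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard Transformer attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # weighted sum of values

rng = np.random.default_rng(0)
text_feats = rng.standard_normal((1, 8))    # hypothetical text embedding (query)
price_feats = rng.standard_normal((5, 8))   # five calendar days of price features
fused = scaled_dot_product_attention(text_feats, price_feats, price_feats)
```

Here the attention weights select which of the five days' price features matter most given the text, producing a fused representation for the downstream movement classifier.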