2020
DOI: 10.1007/s00521-020-05532-z
A CNN-BiLSTM-AM method for stock price prediction


Cited by 336 publications (154 citation statements)
References 26 publications
“…Das et al. [16] discuss how Twitter data can support decision making such as stock market prediction, using recurrent neural networks with Long Short-Term Memory (LSTM) to extract online stock data from various websites and forecast a company's future stock prices. Lu et al. [17] investigated using CNN-BiLSTM-AM to predict a stock's closing price for the next day. The method combines a convolutional neural network (CNN), bi-directional long short-term memory (BiLSTM), and an attention mechanism (AM): the CNN extracts features from the input data, and the BiLSTM uses those features to predict the next day's closing price.…”
Section: Literature Review
confidence: 99%
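The CNN-BiLSTM-AM pipeline described above can be sketched end to end with plain numpy. This is a minimal illustration, not the paper's implementation: all shapes, weights, and the simplified (ungated) recurrence standing in for the BiLSTM are assumptions chosen to make the three stages visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a window of 10 past days with 8 features per day.
T, F, H = 10, 8, 16  # time steps, input features, hidden size
x = rng.normal(size=(T, F))

# 1) CNN stage: one 1-D convolution (kernel width 3) extracts local features.
K = 3
w_conv = rng.normal(size=(K, F, H)) * 0.1
conv = np.stack([np.tensordot(x[t:t + K], w_conv, axes=([0, 1], [0, 1]))
                 for t in range(T - K + 1)])   # (T-K+1, H)
conv = np.maximum(conv, 0.0)                   # ReLU

# 2) "BiLSTM" stage, simplified to two plain tanh recurrences run in
#    opposite directions; a real BiLSTM adds forget/input/output gating.
def rnn(seq, w_in, w_rec):
    h, out = np.zeros(H), []
    for v in seq:
        h = np.tanh(v @ w_in + h @ w_rec)
        out.append(h)
    return np.stack(out)

w_in = rng.normal(size=(H, H)) * 0.1
w_rec = rng.normal(size=(H, H)) * 0.1
fwd = rnn(conv, w_in, w_rec)
bwd = rnn(conv[::-1], w_in, w_rec)[::-1]
states = np.concatenate([fwd, bwd], axis=1)    # (T-K+1, 2H)

# 3) Attention stage: score each time step, softmax, weighted sum.
w_att = rng.normal(size=(2 * H,))
scores = states @ w_att
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()
context = alpha @ states                       # (2H,)

# Linear head maps the attention context to a next-day closing price.
w_out = rng.normal(size=(2 * H,))
pred = float(context @ w_out)
print(pred)
```

With random weights the prediction is meaningless; the point is the data flow — features, bidirectional states, attention weights summing to one, one scalar output.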
“…The memory cell controls the transfer of information to the next moment. The forget gate selectively discards the input from the previous node [32]. The input gate selectively memorizes the input at the current stage, keeping more of the important parts and less of the unimportant parts.…”
Section: LSTM
confidence: 99%
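The gating behavior described above can be shown with a single numpy LSTM step. The sizes and random parameters below are illustrative assumptions; only the gate equations follow the standard LSTM formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the four gate parameter sets along axis 0."""
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    f = sigmoid(z[0:H])          # forget gate: drop parts of the old cell state
    i = sigmoid(z[H:2 * H])      # input gate: how much new content to record
    o = sigmoid(z[2 * H:3 * H])  # output gate: how much cell state to expose
    g = np.tanh(z[3 * H:4 * H])  # candidate cell content
    c = f * c_prev + i * g       # memory cell carries information onward
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
D, H = 4, 8                      # illustrative input and hidden sizes
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):  # run five time steps
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```

Because the forget gate output lies in (0, 1), the cell state blends old and new information rather than overwriting it, which is what lets important parts of the input persist across steps.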
“…where Y_{t+T} is the predicted object in the next T hours, f represents the final model learned from the historical data, X_t denotes the data at the predicting moment, and X_{t-T} are the data from T hours before the predicting moment. … [20,21], with smaller error and higher prediction accuracy. At present, the most widely used form of the LSTM network replaces the neural nodes in the hidden layer of an RNN with LSTM units [22].…”
Section: Problem Definitions
confidence: 99%
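The formulation Y_{t+T} = f(X_{t-T}, …, X_t) amounts to building a sliding-window supervised dataset. A minimal sketch, assuming a toy hourly series (the series values and window length here are purely illustrative):

```python
import numpy as np

series = np.arange(20, dtype=float)  # stand-in for an hourly series
T = 3                                # look-back and look-ahead horizon

# Each sample pairs the window X_{t-T}..X_t with the target Y_{t+T}.
X = np.stack([series[t - T:t + 1] for t in range(T, len(series) - T)])
Y = np.array([series[t + T] for t in range(T, len(series) - T)])

print(X.shape, Y.shape)  # (14, 4) (14,)
```

Any model f (LSTM or otherwise) is then trained to map each row of X to the corresponding entry of Y.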