2022
DOI: 10.3390/app12031007
Research on the Feasibility of Applying GRU and Attention Mechanism Combined with Technical Indicators in Stock Trading Strategies

Abstract: The vigorous development of time-series neural networks in recent years has opened many possibilities for applications in financial technology. This research proposes a stock trend prediction model that combines a Gated Recurrent Unit with an attention mechanism. In the proposed framework, the model takes the daily opening price, closing price, highest price, lowest price, and trading volume of stocks as input, and uses technical-indicator transition prediction as a label to predict the possible rise and f…

Cited by 22 publications (12 citation statements)
References 36 publications
“…Chw(C + K²)). Since the input to the GRU is a sequence of m frames representing a given video, the time complexity of a single GRU is O(m·d_h² + m·d_h·d_i), where d_h and d_i denote the dimensions of the hidden state and the input, respectively [71,72]. The GRU sequence has a time complexity…”
Section: Time Complexity Analysis Of the Proposed Methods
confidence: 99%
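The quoted bound can be sanity-checked with a back-of-the-envelope operation count. The sketch below is illustrative only — the function name and the factor of 3 (one weight-matrix pair per GRU gate/candidate) are our assumptions, not taken from the cited papers:

```python
def gru_flops(m, d_h, d_i):
    """Approximate multiply-accumulate count for a single-layer GRU.

    Each of the 3 gate/candidate computations multiplies the input (size
    d_i) and the previous hidden state (size d_h) by weight matrices of
    shape (d_h, d_i) and (d_h, d_h). One time step therefore costs about
    3 * (d_h * d_i + d_h * d_h) MACs, and a sequence of m steps costs m
    times that, i.e. O(m * d_h**2 + m * d_h * d_i) as quoted above.
    """
    per_step = 3 * (d_h * d_i + d_h * d_h)
    return m * per_step

# Doubling the hidden size roughly quadruples the dominant d_h**2 term:
base = gru_flops(m=100, d_h=64, d_i=32)   # 1,843,200 MACs
wide = gru_flops(m=100, d_h=128, d_i=32)  # 6,144,000 MACs
```

Note how the d_h² term dominates once the hidden state is larger than the input, which is why the hidden dimension drives GRU cost in practice.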
“…It is designed to capture long-range dependencies and patterns in sequences, making it suitable for tasks like stock forecasting. GRU was specifically designed to address some of the limitations of traditional RNNs, such as the vanishing-gradient problem [18], [19]. GRU has a simpler architecture than LSTM and is equally viable for stock prediction.…”
Section: GRU
confidence: 99%
“…GRU has a simpler architecture than LSTM and is equally viable for stock prediction, since it is capable of yielding strong results [18] and of outperforming other high-achieving models [19]. This is due, in part, to its fast training speed and its efficiency at capturing short-term dependencies within sequences.…”
Section: GRU
confidence: 99%
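For readers unfamiliar with the gate structure these citing papers refer to, here is a minimal NumPy sketch of a single GRU step (biases omitted for brevity; the names such as `gru_cell` are ours, not from the cited work). The additive blend `(1 - z) * h_prev + z * h_tilde` is what helps mitigate the vanishing-gradient problem mentioned above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU time step (a sketch, not any cited paper's exact code).

    z (update gate) controls how much of the old state is kept; r (reset
    gate) controls how much of the old state feeds the candidate state.
    """
    z = sigmoid(W_z @ x + U_z @ h_prev)              # update gate
    r = sigmoid(W_r @ x + U_r @ h_prev)              # reset gate
    h_tilde = np.tanh(W_h @ x + U_h @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde

# Usage with random weights: d_i = 4 input features, d_h = 3 hidden units.
rng = np.random.default_rng(0)
d_i, d_h = 4, 3
W = [rng.standard_normal((d_h, d_i)) for _ in range(3)]
U = [rng.standard_normal((d_h, d_h)) for _ in range(3)]
h = gru_cell(rng.standard_normal(d_i), np.zeros(d_h),
             W[0], U[0], W[1], U[1], W[2], U[2])
```

Compared with an LSTM, this cell has two gates instead of three and no separate cell state, which is the "simpler architecture" the citing papers credit for faster training.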
“…The Informer algorithm uses the ProbSparse self-attention mechanism during encoding and decoding, considering only the part that contributes the most to the attention. Compared with LSTM [11] and the Transformer [12], its computational cost and memory usage are lower.…”
Section: Informer Model for Long-term Stock Price Prediction
confidence: 99%
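The ProbSparse idea of "only considering the part that contributes the most" can be sketched as follows. This is a simplified toy, not the Informer implementation: the real algorithm also subsamples keys when scoring queries, and the fallback that assigns lazy queries the mean of V is our stated simplification. It shows only how restricting full softmax attention to the top-u queries reduces the computation:

```python
import numpy as np

def probsparse_attention(Q, K, V, u):
    """Toy sketch of ProbSparse-style attention.

    Each query is scored by a sparsity measure (max minus mean of its
    scaled dot products with the keys); only the top-u "active" queries
    receive full softmax attention, while the remaining "lazy" queries
    fall back to the mean of V.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (L_q, L_k)
    measure = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(measure)[-u:]                   # u most active queries
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))   # lazy queries -> mean(V)
    s = scores[top]
    w = np.exp(s - s.max(axis=1, keepdims=True))     # stable row softmax
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V
    return out
```

With u chosen on the order of log(L_q), only O(L_k · log L_q) score rows need the softmax, which is the source of the lower computation and memory usage the citing paper notes.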