NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium
DOI: 10.1109/noms.2018.8406290
Applying gated recurrent units approaches for workload prediction

Cited by 15 publications (12 citation statements)
References 13 publications
“…9(b) to (f), the TSA for workload compression is effective under proper settings of top hidden units, providing an effective feature representation that greatly reduces the computational complexity of our proposed method for cloud workload prediction. Based on the Google datasets and the workload-compression results preprocessed with the TSA, we evaluate the proposed L-PAW against other recent RNN-based methods for workload prediction, including the recurrent neural network (RNN) [22], long short-term memory (LSTM) [27], gated recurrent unit (GRU) [30], and echo state networks (ESN) [28]. We compare both the prediction accuracy and learning efficiency of these methods, measured by MSE and average training time, respectively.…”
Section: Results
Mentioning confidence: 99%
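The excerpt above scores methods on prediction accuracy (MSE) and learning efficiency (average training time). A minimal sketch of that evaluation in Python with NumPy; `fit_epoch` is a hypothetical single-epoch training hook, not an API from the cited works:

```python
import time
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error between observed and predicted workloads."""
    return float(np.mean((y_true - y_pred) ** 2))

def avg_training_time(model, X, y, epochs: int = 10) -> float:
    """Average per-epoch wall-clock training time for a model exposing
    a (hypothetical) fit_epoch(X, y) method."""
    start = time.perf_counter()
    for _ in range(epochs):
        model.fit_epoch(X, y)  # assumed single-epoch training step
    return (time.perf_counter() - start) / epochs
```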
“…Compared with LSTM, GRU converges more easily, with fewer parameters to set [12]. However, little research [30] has applied GRU-based approaches that account for the training efficiency of neural networks when predicting workload in the cloud environment.…”
Section: RNN-based Approaches for Workload Prediction
Mentioning confidence: 99%
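The parameter gap behind that claim is easy to verify directly: an LSTM layer has four gate blocks where a GRU has three. A quick sketch using PyTorch (framework choice is an assumption; the excerpt names none) that counts trainable parameters of equally sized layers:

```python
import torch.nn as nn

def param_count(module: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

# Same input/hidden sizes for a fair comparison.
lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=1)
gru = nn.GRU(input_size=64, hidden_size=128, num_layers=1)

# Four gate blocks (LSTM) vs. three (GRU): GRU is ~25% smaller.
print(f"LSTM: {param_count(lstm)}")  # 99,328
print(f"GRU:  {param_count(gru)}")   # 74,496
```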
“…Guo et al [26] designed an improved LSTM-based model, N-LSTM, for virtual machine workload forecasting that can make forecasts at irregular time intervals. Guo and Yao [27] proposed a workload prediction method based on GRU [13]. The model can learn temporal patterns and long-term dependencies in sequences of arbitrary length, and its training time is shorter than that of LSTM.…”
Section: Deep Learning Methods
Mentioning confidence: 99%
“…GRU [27]: The basic gated recurrent unit (GRU) [13] network is a recurrent neural network that uses the GRU cell as its computational unit. The historical data are fed into a multilayer GRU network, and the last output value of the last layer is taken as the forecast value.…”
Section: Experimental Setting (Baseline Methods)
Mentioning confidence: 99%
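A minimal sketch of that baseline, again assuming PyTorch: a multilayer GRU reads a history window, and the last time step's output from the last layer feeds a linear head that produces the forecast. Layer sizes are illustrative, not taken from the cited paper:

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """Multilayer GRU that forecasts the next workload value from a history window."""
    def __init__(self, n_features=1, hidden_size=64, num_layers=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window_len, n_features)
        out, _ = self.gru(x)              # out: (batch, window_len, hidden_size)
        return self.head(out[:, -1, :])   # last step of last layer -> forecast

# Usage: forecast the next value from a window of 32 past observations.
model = GRUForecaster()
window = torch.randn(8, 32, 1)            # batch of 8 workload histories
pred = model(window)                      # shape: (8, 1)
```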