Cloud Workload Turning Points Prediction via Cloud Feature-Enhanced Deep Learning (2023)
DOI: 10.1109/tcc.2022.3160228

Cited by 14 publications (6 citation statements); References 28 publications
“…Correspondingly, the five exclusive workload prediction classes are designated as Evolutionary Learning, Deep Learning, Hybrid Learning, Ensemble Learning, and Quantum Learning. In the Evolutionary Learning class, the candidate approaches are ANN+SADE [25], ANN-BADE [26], ANN-BHO [27], SDWF [27], and FLGAPSONN [28] [30], Bi-LSTM [31], Crystal ILP [32], and FEMT-LSTM [33], which are predominantly based on the functionality of LSTM models. The autoencoder sub-category consists of Encoder+LSTM [34], CP Autoencoder [35], LPAW Autoencoder [36], and GRUED [37], which are derived by applying useful modifications to the traditional autoencoders.…”
Section: Methods
confidence: 99%
“…The performance is measured as predicted output is achieved for cloud resource management. [Fig. 3: Classification and Taxonomy of Machine learning based Workload Prediction Models, listing models 2D LSTM [30] through EQNN [56]]…”
Section: Workload Prediction Operational Flow
confidence: 99%
“…Then it computes a resource allocation for the container u_C by using the error and M so that, ideally, τ_C = τ•_C. Each controller on a machine m transfers the computed u_C to the Supervisor, which aggregates all the allocations in a vector Ūm and, if needed, computes a feasible resource allocation u_C for each container according to a specified policy (e.g., proportional, priority-based, requirement-based). Moreover, if the sum of to-be-allocated resources is lower than the capacity of the machine, the supervisor can (optionally) scale up the allocations to speed up applications' performance at the expense of a sub-optimal allocation (over-provisioning).…”
Section: Control Architecture
confidence: 99%
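The supervisor step quoted above (clamp per-container demands to machine capacity under a policy, optionally redistributing spare capacity) can be sketched as follows. This is a minimal illustration of a proportional policy only; the function and parameter names (`supervise`, `demands`, `capacity`) are illustrative assumptions, not identifiers from the cited paper.

```python
def supervise(demands, capacity, scale_up=True):
    """Return a feasible per-container allocation for one machine.

    demands  -- allocations u_C requested by the per-container controllers
    capacity -- total resource capacity of the machine
    scale_up -- if True, spare capacity is also distributed proportionally
                (the optional over-provisioning the excerpt mentions)
    """
    total = sum(demands)
    if total == 0:
        return [0.0 for _ in demands]
    if total > capacity or scale_up:
        # Proportional policy: each container keeps the same share of the
        # machine that it had of the aggregate demand.
        factor = capacity / total
        return [d * factor for d in demands]
    # Demands already fit and no scale-up requested: grant them as-is.
    return list(demands)
```

For example, demands of 2.0 and 6.0 on a machine of capacity 4.0 are scaled to 1.0 and 3.0; with spare capacity and `scale_up=True`, allocations grow proportionally instead. Priority- or requirement-based policies would replace the single `factor` with per-container weights.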
“…Software systems are increasingly sophisticated, and they frequently need to handle a wide range of dynamic workloads while maintaining a set service quality [1], [2]. As a result, system scalability is extremely important, and computational resources should be allocated as needed [3], [4], [5]. Provisioned resources should, ideally, match the intensity of dynamic workloads to be served and avoid both underprovisioning (i.e., resources are not enough to handle the workload) and overprovisioning (i.e., resources are more than needed) scenarios [6], [7].…”
Section: Introduction
confidence: 99%