2020 22nd International Conference on Transparent Optical Networks (ICTON)
DOI: 10.1109/icton51198.2020.9203040
Short-Term Traffic Forecasting in Optical Network using Linear Discriminant Analysis Machine Learning Classifier

Cited by 23 publications (10 citation statements)
References 7 publications
“…All experiments were conducted using datasets, which were generated based on real traffic characteristics. The results reported in the next sections prove that the proposed approaches outperform the methods described in [16].…”
Section: Related Work (supporting)
confidence: 58%
“…This work is a continuation and extension of our recent paper [16]. The contribution of this paper is threefold.…”
Section: Introduction (mentioning)
confidence: 65%
“…We used a 10-fold cross-validation training strategy, and each model was trained one hundred times, separately. First, we train ten classical machine learning models using LightGBM Classifier (LGBM) [50], Gradient Boosting Classifier (GBC) [51], XGBoost Classifier (XGB) [52], Extra Tree Classifier (ETC) [53], k-Neighbors Classifier (KNN) [54], Decision Tree Classifier (DT) [55], Random Forest Classifier (RF) [56], Linear Discriminant Analysis (LDA) [57], Support Vector Classifier (SVC) [58], and Logistic Regression (LR) [59]. Then, we used the three models with the highest average accuracy for stacking.…”
Section: Materials and Methods (mentioning)
confidence: 99%
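The workflow in the quotation above — score a pool of classical classifiers with 10-fold cross-validation, then stack the top three by mean accuracy — can be sketched as follows. This is a minimal illustration with synthetic data and a subset of the listed models, not the cited paper's actual code or dataset.

```python
# Sketch of the quoted strategy: 10-fold CV over several classical
# classifiers, then stack the three with the highest mean accuracy.
# Synthetic data and model subset are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Candidate base models (a subset of those listed in the quotation).
candidates = {
    "dt": DecisionTreeClassifier(random_state=0),
    "rf": RandomForestClassifier(n_estimators=50, random_state=0),
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(),
    "lr": LogisticRegression(max_iter=1000),
}

# Score each candidate with 10-fold cross-validation.
scores = {name: cross_val_score(est, X, y, cv=10).mean()
          for name, est in candidates.items()}

# Keep the three models with the highest mean accuracy and stack them,
# using logistic regression as the meta-learner.
top3 = sorted(scores, key=scores.get, reverse=True)[:3]
stack = StackingClassifier(
    estimators=[(n, candidates[n]) for n in top3],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack_acc = cross_val_score(stack, X, y, cv=10).mean()
print(top3, round(stack_acc, 3))
```

The quoted study repeats this training one hundred times per model; the sketch runs a single pass for brevity.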
“…In order to obtain an accurate assessment of the performance of the model, we carried out the stratified 10-fold cross-validation 100 times. In the beginning, traditional machine learning modeling was carried out with the assistance of the LightGBM Classifier (LGBM) [57], Gradient Boosting Classifier (GBC) [58], XGBoost Classifier (XGB) [59], Extra Tree Classifier (ETC) [60], Decision Tree Classifier (DT) [61], Random Forest Classifier (RF) [62], Linear Discriminant Analysis (LDA) [63], and Logistic Regression (LR) [64]. Following this, the three models that had the best overall effectiveness within these records were chosen for stacking modeling.…”
Section: Materials and Methods (mentioning)
confidence: 99%
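The repeated stratified cross-validation described in this quotation can be expressed with scikit-learn's `RepeatedStratifiedKFold`. A minimal sketch, assuming synthetic data and 5 repeats instead of the cited paper's 100 to keep it quick:

```python
# Sketch of repeated stratified 10-fold cross-validation, as described
# in the quotation. Synthetic data; 5 repeats instead of 100 for brevity.
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Stratified folds preserve the class ratio in every split; repeating
# with different shuffles yields a distribution of accuracy estimates.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)

# 10 folds x 5 repeats = 50 accuracy estimates; report mean and spread.
print(f"{scores.mean():.3f} +/- {scores.std():.3f}")
```

Reporting the mean and standard deviation over all repeats is what gives the "accurate assessment" the authors refer to, since a single 10-fold split can be sensitive to how the data happens to be partitioned.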