2019
DOI: 10.1016/j.future.2019.05.060

Neural network architecture based on gradient boosting for IoT traffic prediction

Cited by 45 publications (65 citation statements) · References 46 publications
“…The naïve Bayes algorithm fails to classify the incoming traffic when the network traffic is high. Martin et al. [19] proposed a deep learning-based neural network architecture for IoT traffic prediction. This architecture contains residual, boosted, and stacked networks.…”
Section: Traffic Differentiation Schemes
confidence: 99%
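The boosted networks mentioned in this excerpt follow the additive idea of gradient boosting applied to neural networks. A minimal sketch, assuming squared loss so that each stage fits the residual of the running ensemble; scikit-learn's MLPRegressor, the stage count, and the shrinkage value are illustrative assumptions, not details taken from the cited paper:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def fit_boosted_networks(X, y, n_stages=5, lr=0.5):
        # With squared loss, the negative gradient is the residual, so each
        # small network is trained on what the ensemble still gets wrong.
        models, pred = [], np.zeros(len(y))
        for _ in range(n_stages):
            net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
            net.fit(X, y - pred)               # fit the current residuals
            pred = pred + lr * net.predict(X)  # shrink each stage's contribution
            models.append(net)
        return models

    def predict_boosted(models, X, lr=0.5):
        # The ensemble prediction is the shrunken sum of all stage outputs.
        return sum(lr * m.predict(X) for m in models)

Each new stage only has to model the remaining error, which is what makes the sequential construction stronger than a single network of the same total size.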
“…Finally, the information to consider from the previous time-slots (predictors) can be extended to the load values and additional information, such as date/time or weather data (multivariate forecast). Including this additional (exogenous) information as new features imposes difficulties on statistical analysis methods, since only a few can cope with multivariate forecasts [9]. This creates additional opportunities for machine learning and deep learning (ML/DL) techniques that easily handle vector-valued predictors.…”
Section: Introduction
confidence: 99%
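One common way to realize the multivariate forecast described here is to stack lagged load values together with exogenous calendar features in a single design matrix. A small sketch with pandas (the lag count and the hour/day-of-week features are illustrative assumptions, and the series is assumed to carry a DatetimeIndex):

    import pandas as pd

    def make_features(series: pd.Series, n_lags: int = 24) -> pd.DataFrame:
        # Lagged load values act as the autoregressive predictors.
        df = pd.DataFrame({f"lag_{k}": series.shift(k) for k in range(1, n_lags + 1)})
        df["hour"] = series.index.hour            # exogenous: time of day
        df["dayofweek"] = series.index.dayofweek  # exogenous: day of week
        df["target"] = series.values              # value to predict
        return df.dropna()                        # drop rows without full lag history

Because every row is now a plain feature vector, any ML/DL regressor that accepts vector-valued predictors can consume it directly, which is the opportunity the excerpt points to.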
“…The only requirement for a base model is that it be trainable end-to-end by gradient descent and support the addition of a final layer in both the training and prediction stages. Thus, we have considered as base models several configurations of 1D and 2D convolutional neural networks (CNN) [13, 14], long short-term memory (LSTM) networks [15] and their combination, as well as several additive-ensemble (AE) deep learning models especially suitable for time-series forecasting [9, 16]. We do not include sequence-to-sequence (Seq2seq) models as a base model, since their forward pass differs between the training and test stages, which adds complexity to the proposed extension.…”
Section: Introduction
confidence: 99%
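The "addition of a final layer" requirement can be pictured as wrapping any base network with one extra trainable output layer. A hedged Keras sketch (TensorFlow/Keras, the LSTM base, and all shapes are illustrative assumptions rather than the configuration used by the authors):

    import tensorflow as tf

    def with_final_layer(base: tf.keras.Model, horizon: int) -> tf.keras.Model:
        # Append one Dense output layer; the whole stack remains trainable
        # end-to-end by gradient descent, as the excerpt requires.
        out = tf.keras.layers.Dense(horizon)(base.output)
        return tf.keras.Model(base.input, out)

    base = tf.keras.Sequential([
        tf.keras.Input(shape=(24, 1)),  # 24 time steps, 1 feature
        tf.keras.layers.LSTM(32),
    ])
    model = with_final_layer(base, horizon=12)
    model.compile(optimizer="adam", loss="mse")

The same wrapper applies unchanged to a 1D/2D CNN base; models whose training and prediction passes differ, such as Seq2seq, do not fit this pattern as cleanly.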
“…It has been shown through various studies [8]-[11] that the accuracy and stability of a model can be improved by combining the results of several predictive models. Boosting, proposed by Freund [12], improves performance by combining multiple weak models (i.e., submodels) into a strong model, unlike bagging [13], [14], which trains the submodels independently. The principal idea of boosting is to generate models sequentially, thereby improving overall model performance.…”
Section: Introduction
confidence: 99%
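The contrast the excerpt draws can be stated compactly in scikit-learn, where both ensemble styles are available off the shelf (the estimator counts and tree depth below are arbitrary illustrative values):

    from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
    from sklearn.tree import DecisionTreeRegressor

    # Bagging: submodels are trained independently on bootstrap resamples,
    # and their predictions are averaged.
    bagging = BaggingRegressor(DecisionTreeRegressor(max_depth=3), n_estimators=50)

    # Boosting: submodels are generated sequentially, each correcting the
    # ensemble built so far; learning_rate shrinks every stage's contribution.
    boosting = GradientBoostingRegressor(n_estimators=50, learning_rate=0.1)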