2023
DOI: 10.1109/access.2023.3287975

A Global Modeling Pruning Ensemble Stacking With Deep Learning and Neural Network Meta-Learner for Passenger Train Delay Prediction

Veronica A. Boateng,
Bo Yang

Abstract: Train operators can improve service quality for railway passengers and traffic management by accurately predicting travel arrangements and delays. Precise prediction of train delays is vital for creating feasible scheduled timetables. Introducing pruned stacked-ensemble deep neural networks to train delay prediction improves both model prediction accuracy and computational time. In this study, we propose a novel pruning stacked ensemble learning model that uses pruned multilayer perceptron (MLP) neural netwo…
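The abstract refers to pruned MLP base learners. The paper's exact pruning procedure is not given in this excerpt; the sketch below shows one common approach, magnitude-based weight pruning, where the smallest-magnitude weights of a layer are zeroed to reduce computation. The function name and sparsity level are illustrative, not taken from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of smallest-magnitude entries of a weight matrix."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; everything at or below it is dropped.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # stand-in for one MLP layer's weights
W_pruned = magnitude_prune(W, sparsity=0.75)
frac_zero = np.mean(W_pruned == 0.0)  # ~0.75 of the weights are now zero
```

In a stacked ensemble, each base MLP could be pruned this way before its outputs are fed to the meta-learner, trading a small accuracy loss for faster inference.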

Cited by 2 publications (1 citation statement)
References 33 publications (34 reference statements)
“…Model simplification and compression methods have been proposed to reduce the computational requirements of deep learning object detection models and increase detection speed while maintaining high detection accuracy. Available model simplification and compression techniques include tensor decomposition [10][11][12][13], network pruning [14][15][16][17][18][19], knowledge distillation [20][21][22][23], and neural architecture search (NAS) [24][25][26][27]. Tensor decomposition techniques such as low-rank matrix decomposition and tensorized decomposition simplify complex models by reducing a weight matrix or high-dimensional tensor to multiple low-rank matrices or low-dimensional tensors, respectively [28].…”
Section: Model Simplification
confidence: 99%
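The low-rank matrix decomposition mentioned in the citation statement can be sketched with a truncated SVD: a dense weight matrix W is replaced by two thin factors whose product approximates W with fewer parameters. This is a generic illustration of the technique, not code from the cited works.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as U_r @ V_r with U_r (m x rank), V_r (rank x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(1)
# Build a matrix of rank at most 32 so a rank-32 factorization is near-exact.
W = rng.normal(size=(64, 32)) @ rng.normal(size=(32, 128))
U_r, V_r = low_rank_factorize(W, rank=32)

params_full = W.size                  # 64 * 128 = 8192 parameters
params_lowrank = U_r.size + V_r.size  # 64*32 + 32*128 = 6144 parameters
rel_err = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
```

Applying `W @ x` then becomes `U_r @ (V_r @ x)`, which cuts both storage and multiply-accumulate operations whenever `rank * (m + n) < m * n`.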