2022
DOI: 10.1177/03611981221095087
Deep Learning Model for Crash Injury Severity Analysis Using Shapley Additive Explanation Values

Abstract: Analysis of traffic crash and associated data provides insights and assists with identification of cause-and-effect relationships with crash probabilities and outcomes. This study applied a deep neural network (DNN) to eight years of police-reported Nebraska crash data to model crash injury severity outcomes. Prediction performance and model interpretability were examined. The developed DNN excelled in prediction accuracy, precision, and recall but was computationally intensive compared with a baseline mu…
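As a rough illustration of the pipeline the abstract describes (a DNN classifier for injury severity, interpreted with SHAP values), the sketch below trains a small feed-forward network with Keras and attributes its predictions with the shap package. The feature count, five-level severity scale, architecture, and synthetic data are assumptions for illustration only, not the paper's actual specification.

```python
# A minimal sketch, not the authors' code: a small feed-forward network for
# crash injury severity classification, interpreted with SHAP. Feature count,
# severity levels, and architecture are illustrative assumptions.
import numpy as np
import shap
from tensorflow import keras

n_features, n_classes = 20, 5  # assumed encoded crash factors / severity levels

# Synthetic stand-in data; the study itself used eight years of
# police-reported Nebraska crash records.
rng = np.random.default_rng(42)
X_train = rng.random((1000, n_features)).astype("float32")
y_train = rng.integers(0, n_classes, size=1000)

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=0)

# Model-agnostic SHAP: summarize the training data into a small background set,
# then attribute each prediction to the input factors (one array per class).
# shap.DeepExplainer is the gradient-based alternative for deep networks.
background = shap.kmeans(X_train, 10)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X_train[:20])
```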

Cited by 12 publications (6 citation statements)
References 17 publications
“…ML and DL methods -Higher predictive accuracy: ML and DL methods can achieve superior predictive accuracy compared to statistical models, particularly when handling complex and non-linear relationships between variables or when dealing with large and complex datasets (Kang & Khattak, 2022; Komol et al., 2021).…”
Section: Literature Review
confidence: 99%
“…-Computational resources: DL models typically demand more computational resources and longer training times in comparison to statistical and ML models (Kang & Khattak, 2022).…”
Section: Literature Review
confidence: 99%
“…SHAP improves the interpretability by estimating the positive or negative relationship for each factor with the dependent variable from the outcome of the model such as XGBoost. This technique has been used extensively in traffic safety studies, including crash injury prediction, to better interpret risk factors (45, 46). In recent years, SHAP has also been used to estimate and interpret transportation mode preferences from a variety of data sources, including smart cards and travel surveys (47, 48).…”
Section: Literature Review
confidence: 99%
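As an illustration of the SHAP-with-XGBoost workflow this statement refers to, the sketch below fits a gradient-boosted classifier on synthetic stand-in data and reads off the signed SHAP contribution of each factor; the feature names, binary severity outcome, and model settings are assumptions for illustration, not taken from any of the cited studies.

```python
# Minimal sketch, assuming a generic tabular safety dataset: SHAP TreeExplainer
# over an XGBoost classifier, showing signed (positive/negative) contributions
# of each factor to the predicted outcome. Feature names are illustrative.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
feature_names = ["speed_limit", "dark_unlit", "wet_surface", "driver_age"]
X = rng.random((500, len(feature_names)))
y = rng.integers(0, 2, size=500)  # e.g., severe vs. non-severe crash

model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X, y)

# TreeExplainer gives exact SHAP values for tree ensembles; the sign of each
# value indicates whether a factor pushed the prediction toward or away from
# the severe outcome for that observation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature as a simple global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")
```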