2022
DOI: 10.1111/ina.12984
Interpretability analysis for thermal sensation machine learning models: An exploration based on the SHAP approach

Abstract: Machine learning models have been widely used for studying thermal sensations. However, the black-box properties of machine learning models lead to a lack of model transparency, and existing explanations for thermal sensation models are generally flawed from the perspective of interpretable methods. In this study, we perform an interpretability analysis of thermal sensation machine learning models using the "SHapley Additive exPlanation" (SHAP) approach from game theory. The effects of different features …
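The SHAP attributions the abstract refers to can be illustrated with a minimal, self-contained sketch that computes exact Shapley values by subset enumeration (a brute-force stand-in for the shap library; the toy thermal-comfort model and its features are hypothetical, not taken from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction f(x) relative to f(baseline).

    For each feature i, average its marginal contribution f(S ∪ {i}) - f(S)
    over all feature coalitions S, with the classic Shapley weights.
    """
    n = len(x)

    def v(S):
        # Value of coalition S: features in S take their observed value,
        # the rest are held at the baseline.
        z = [x[j] if j in S else baseline[j] for j in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Hypothetical model: prediction = 2*temperature + humidity*airspeed
f = lambda z: 2 * z[0] + z[1] * z[2]
x = [1.0, 2.0, 3.0]          # observed feature values
base = [0.0, 0.0, 0.0]       # baseline (reference) values
phi = shapley_values(f, x, base)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
print(phi)  # the humidity*airspeed interaction is split evenly: [2.0, 3.0, 3.0]
```

The enumeration is exponential in the number of features, which is why the shap library uses model-specific approximations (e.g., TreeExplainer) in practice; the brute-force version above is only for illustrating what the attributions mean.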

Cited by 48 publications (21 citation statements)
References 52 publications (183 reference statements)
“…The risk prediction model for detection of early sepsis using the transcriptome data-based method and the CatBoost method showed good performance in terms of model validity and clinical net benefit. However, the black-box properties of machine learning models result in a lack of model transparency, and existing explanations for the models are flawed in terms of the interpretation of the methods (Yang et al., 2022).…”
Section: Results
confidence: 99%
“…The KNN algorithm (Bania and Halder, 2020) was used to fill in the missing data. The preprocessed data set was divided into training and test sets in a ratio of 2:1 (274 for the training set and 118 for the test set) (Fabian et al., 2011; Yang et al., 2022).…”
Section: Methods
confidence: 99%
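The preprocessing pipeline described in this excerpt (KNN imputation of missing values, then a 2:1 train/test split of 392 samples) can be sketched as follows, assuming scikit-learn's KNNImputer and train_test_split; the synthetic data, the number of neighbours, and the random seeds are illustrative assumptions, not details from the cited study:

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(392, 5))            # synthetic feature matrix
y = rng.integers(0, 2, size=392)         # synthetic binary labels

# Knock out ~5% of entries to simulate missing data.
mask = rng.random(X.shape) < 0.05
X[mask] = np.nan

# KNN imputation: each missing entry is filled from its nearest
# complete neighbours in feature space.
X_filled = KNNImputer(n_neighbors=5).fit_transform(X)
assert not np.isnan(X_filled).any()

# 2:1 split, matching the 274/118 partition quoted in the excerpt.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_filled, y, test_size=118, random_state=42)
print(len(X_tr), len(X_te))  # 274 118
```

Passing an integer `test_size` to train_test_split fixes the test set at exactly 118 samples, which reproduces the 274/118 partition more precisely than a fractional ratio would.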