2021
DOI: 10.3390/s21217125
Using Explainable Machine Learning to Improve Intensive Care Unit Alarm Systems

Abstract: Due to the continuous monitoring process of critical patients, Intensive Care Units (ICU) generate large amounts of data, which are difficult for healthcare personnel to analyze manually, especially in overloaded situations such as those present during the COVID-19 pandemic. Therefore, the automatic analysis of these data has many practical applications in patient monitoring, including the optimization of alarm systems for alerting healthcare personnel. In this paper, explainable machine learning techniques ar…

Cited by 19 publications (13 citation statements)
References 18 publications
“…Furthermore, explainable machine learning techniques could be employed [29], using SHAP analysis to identify how the SHAP values of individual features affect the prediction models, namely whether a feature contributes to predicting MACE or not. Such a SHAP value plot can further show the positive and negative relationships of the variables with the target variable (the classifier's 1 or 0 predictions, i.e., having a MACE or not, respectively); in other words, it reveals how each variable contributes to predicting MACE.…”
Section: Results (mentioning)
confidence: 99%
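The SHAP workflow described in this statement can be illustrated with a minimal sketch, assuming an XGBoost classifier and the Python `shap` library; the feature names and synthetic data below are hypothetical stand-ins for real MACE predictors, not the cited study's dataset.

```python
# Minimal SHAP sketch: explain a binary classifier's predictions.
# Assumes xgboost and shap are installed; data and feature names are
# hypothetical placeholders, not the cited study's variables.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                       # 500 patients, 4 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # synthetic MACE label (1/0)
feature_names = ["age", "sbp", "hr", "creatinine"]  # hypothetical predictors

model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# TreeExplainer computes per-feature SHAP values for each prediction;
# positive values push the model toward class 1 (MACE), negative toward 0.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary ("beeswarm") plot shows each feature's positive or negative
# relationship with the predicted class across the whole dataset.
shap.summary_plot(shap_values, X, feature_names=feature_names)
```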
“…That poses an intriguing question: how can you trust a model's decisions if you cannot fully justify how it reached them? There has been a recent trend toward the growth of XAI for a better understanding of AI black boxes [49, 136, 137, 138, 139]. Grad-CAM or Grad-CAM++ produces a coarse localization map showing the key regions in the image used to predict any target concept (say, “COVID-19” in a classification network) from the gradients of that concept flowing into the final convolutional layer.…”
Section: Discussion (mentioning)
confidence: 99%
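The gradient-based mechanism this statement describes can be sketched in a few lines. The following is a minimal Grad-CAM sketch, assuming PyTorch and a pretrained ResNet-18; the input tensor and target class are placeholders, not the COVID-19 network referenced in the citing work.

```python
# Minimal Grad-CAM sketch in PyTorch. Assumes torchvision >= 0.13;
# the model, input, and target class are illustrative placeholders.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output             # feature maps of last conv block

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]          # gradients w.r.t. those maps

# Hook the final convolutional block (layer4 in ResNet-18).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed image
scores = model(x)
target_class = scores.argmax(dim=1).item()
scores[0, target_class].backward()           # gradients of the target concept

# Global-average-pool the gradients into per-channel weights, combine the
# weighted channels, and apply ReLU so only positive evidence is kept.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```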
“…XGBoost stands out for its ability to obtain the best results on different benchmarks, and it is one of the algorithms best optimized for parallel computation, which makes it one of the most widely used in recent biomedical works [9]–[11]. In addition, it supports GPUs, which allows the full capacity of the algorithm to be exploited.…”
Section: B. Estimation Model: XGBoost (mentioning)
confidence: 99%
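The GPU support mentioned here can be enabled through XGBoost's training parameters. A minimal sketch follows, assuming XGBoost >= 2.0 (which spells GPU training as `device="cuda"`; older releases used `tree_method="gpu_hist"`); the dataset and hyperparameters are illustrative only.

```python
# Minimal sketch of GPU-accelerated XGBoost training. Requires a
# CUDA-capable GPU; data and parameters are illustrative placeholders.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = (X.sum(axis=1) > 0).astype(int)   # synthetic binary label

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "tree_method": "hist",   # histogram-based split finding
    "device": "cuda",        # run training on the GPU (XGBoost >= 2.0)
    "max_depth": 6,
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```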