2021
DOI: 10.1109/access.2021.3054613
Explainability Metrics of Deep Convolutional Networks for Photoplethysmography Quality Assessment

Abstract: Photoplethysmography (PPG) is a noninvasive way to monitor various aspects of the circulatory system and is becoming increasingly widespread in biomedical signal processing. Recently, deep learning methods for analyzing PPG have also become prevalent, achieving state-of-the-art results on heart rate estimation, atrial fibrillation detection, and motion artifact identification. Consequently, a need for interpretable deep learning has arisen within the field of biomedical signal processing. In this paper, we pioneer …

Cited by 26 publications (21 citation statements)
References 24 publications
“…However, this simple labeling completely ignores the minimum level of morphological quality. Both machine-learning and deep-learning techniques [17,18,22,42,43] are employed for data cleaning or for classifying signals as acceptable or anomalous. However, most of these approaches rely on manual data labeling based on experts' annotations.…”
Section: Related Work
confidence: 99%
“…These models assist in estimating highly variant features, thereby improving overall classification performance without compromising delay or scalability. The performance of these models must be validated on different datasets and can be extended via Dense Convolutional Neural Networks [17], fuzzy-logic-based classification [18], Deep Multiple Instance Learning via Multiple Modalities [19], ensemble classification models [20], semi-supervised Multitask Learning (SSMTL) [21], and Variance Maximized Deep Networks [22], which improve feature variance for better signal representation across datasets. Specialized approaches are also discussed that utilize deep learning models for keratoconus and sub-clinical keratoconus detection [23], multivariate analysis using CNNs and decision trees [24], Learning from Label Fuzzy Proportions (LLFP) [25], and Gaussian mixture models (GMM) with a sensor CNN (SCNN) [26], all of which improve the correlative feature mapping between clinical and test features and thus overall classification performance.…”
Section: Literature Review
confidence: 99%
“…The QoT comprises physical and emotional trust, representing the objective and subjective assessment of explainability, respectively. For analysing photoplethysmography using deep learning methods, Zhang et al. [267] proposed two metrics (i.e., congruence and annotation classification) to measure the quality of XAI explanations against human experts' explanations. Similarly, Kaur et al. [268] proposed another metric, "Trustworthy Explainability Acceptance", which measures the Euclidean distance between XAI explanations and domain experts' reasonings in predicting Ductal Carcinoma in Situ (DCIS) recurrence using AI.…”
Section: B. Challenges of XAI for 6G
confidence: 99%
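The distance-based comparison described above can be sketched in a few lines. This is a minimal illustration, not the published definition of either metric: the function name, the flat feature-attribution vectors, and the example scores are all hypothetical, assuming only that an XAI method and a domain expert each assign an importance score to the same set of features and that agreement is summarized by Euclidean distance (lower means closer agreement).

```python
import math

def explanation_distance(xai_weights, expert_weights):
    """Euclidean distance between an XAI attribution vector and
    expert-assigned importance scores over the same features.
    Lower values indicate closer agreement with the expert."""
    if len(xai_weights) != len(expert_weights):
        raise ValueError("attribution vectors must have equal length")
    return math.sqrt(sum((x - e) ** 2
                         for x, e in zip(xai_weights, expert_weights)))

# Hypothetical attribution scores for four signal features
xai_scores = [0.7, 0.1, 0.15, 0.05]
expert_scores = [0.6, 0.2, 0.1, 0.1]
print(explanation_distance(xai_scores, expert_scores))  # ≈ 0.1581
```

In practice such a score would be computed per prediction and aggregated across a test set; normalizing both vectors first (e.g., to unit sum) keeps the distance comparable across explanation methods.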