Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022
DOI: 10.1145/3534678.3539419
RES: A Robust Framework for Guiding Visual Explanation


Cited by 15 publications (10 citation statements); references 19 publications.
“…A natural idea is to exploit these nodule contour annotations to improve the training effect. This can be achieved by using explanation supervision technique, which incorporates explanation annotations (i.e., these nodule contour annotations) into supervision signal, improving both classification performance and inference attribution 34 . However, to our knowledge, no one has applied explanation supervision to clinical diagnosis, leaving the wealth of information in explanation annotations untapped.…”
Section: Methods
confidence: 99%
“…This can be achieved by using explanation supervision technique, which incorporates explanation annotations (i.e., these nodule contour annotations) into supervision signal, improving both classification performance and inference attribution. 34 However, to our knowledge, no one has applied explanation supervision to clinical diagnosis, leaving the wealth of information in explanation annotations untapped. RES is a framework that utilizes explanation annotations to improve both the performance and the explanation quality of backbone DNN.…”
Section: RES for Pulmonary Nodule Detection
confidence: 99%
“…(2) Explainable AI Framework Illustrated in Figure 1 is the architecture of the proposed explainable lung nodule detection (ELND) framework, leveraging the visual explanation approach encapsulated in the previous study [29]. The framework integrates a convolutional neural network (CNN) model, ResNet18 [30], that can be trained to utilize the explanation loss function, incorporating a Gaussian kernel function.…”
Section: Methods (1) Patient Data and Data Preprocessing
confidence: 99%
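The citation above describes training a CNN with an explanation loss that uses a Gaussian kernel to compare the model's saliency map against human annotations (here, nodule contours). The RES paper's exact loss is not reproduced in this excerpt; the sketch below is only an illustration of the general idea, assuming a mean-squared-error distance between the model's saliency map and a Gaussian-smoothed annotation mask, with a hypothetical weight `lam` balancing it against the classification loss:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """2-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth(mask, kernel):
    """Naive 'same'-size convolution with zero padding.
    A symmetric Gaussian makes correlation == convolution."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(mask.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros_like(mask, dtype=float)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = (padded[i:i+kh, j:j+kw] * kernel).sum()
    return out

def explanation_loss(saliency, annotation, sigma=1.0):
    """MSE between the model's saliency map and the
    Gaussian-smoothed human annotation mask (an assumed
    distance; the paper may use a different formulation)."""
    target = smooth(annotation, gaussian_kernel(5, sigma))
    return float(np.mean((saliency - target) ** 2))

def total_loss(cls_loss, saliency, annotation, lam=0.5):
    """Joint objective: classification loss plus weighted
    explanation-supervision term."""
    return cls_loss + lam * explanation_loss(saliency, annotation)
```

Smoothing the binary contour mask with a Gaussian kernel tolerates small boundary inaccuracies in the human annotations, which is the robustness concern the framework is named for.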
“…Saliency detection is to identify the most important and informative part of input features. It has been applied to various domains including CV [13,16], NLP [23,32], etc. The salience map approach is exemplified by [45] to test a network with portions of the input occluded to create a map showing which parts of the data actually have an influence on the network output.…”
Section: Related Work
confidence: 99%
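The occlusion approach the citation attributes to [45] can be sketched as follows: slide a masking patch over the input, re-score the occluded input, and attribute importance to regions whose occlusion drops the output score. The function and toy model below are illustrative names, not from the cited work:

```python
import numpy as np

def occlusion_saliency(model, image, patch=4, stride=4, fill=0.0):
    """Occlusion-based saliency: hide each patch of the image
    in turn and record how much the model's score drops.
    Large drops mark regions that influence the output."""
    H, W = image.shape
    base = model(image)
    sal = np.zeros_like(image, dtype=float)
    count = np.zeros_like(image, dtype=float)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = fill
            drop = base - model(occluded)
            sal[y:y+patch, x:x+patch] += drop
            count[y:y+patch, x:x+patch] += 1
    return sal / np.maximum(count, 1)

# Toy stand-in for a network: the "score" depends only on the
# top-left quadrant, so only that region should light up.
def toy_model(img):
    return img[:4, :4].mean()

img = np.ones((8, 8))
sal = occlusion_saliency(toy_model, img)  # high only in top-left quadrant
```

Averaging by the visit count normalizes pixels covered by overlapping patches when `stride` is smaller than `patch`.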