2022
DOI: 10.18280/ts.390529
HayCAM: A Novel Visual Explanation for Deep Convolutional Neural Networks

Abstract: Explaining the decision mechanism of deep convolutional neural networks (CNNs) is a new and challenging area because of the "black box" nature of CNNs. Class Activation Mapping (CAM), a visual explanation method, is used to highlight important regions of input images by using classification gradients. A shortcoming of current methods is that they use all of the filters in the last convolutional layer, which causes scattered and unfocused activation mapping. HayCAM, a novel visualization method, provides better activation mapping…
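The abstract refers to gradient-based class activation mapping. For orientation, below is a minimal Grad-CAM-style sketch (not the HayCAM method itself) in PyTorch; the names `model`, `target_layer`, `image`, and `class_idx` are placeholders assumed for illustration.

```python
import torch
import torch.nn.functional as F

def gradcam_style_map(model, target_layer, image, class_idx):
    """Gradient-weighted class activation map (Grad-CAM-style sketch).

    Assumptions: `model` is a CNN classifier returning (1, num_classes) scores,
    `target_layer` is its last convolutional layer, `image` is a (1, C, H, W) tensor.
    """
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    scores = model(image)                       # forward pass
    scores[0, class_idx].backward()             # backpropagate the class score
    h1.remove(); h2.remove()

    A = feats[0].squeeze(0)                     # (K, h, w) feature maps
    w = grads[0].squeeze(0).mean(dim=(1, 2))    # (K,) channel weights from gradients
    cam = F.relu((w[:, None, None] * A).sum(0)) # weighted sum over channels, then ReLU
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam                                  # coarse heatmap; upsample to image size as needed
```

This is the baseline that HayCAM builds on; the paper's contribution concerns which of the K filters actually contribute to the map.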

Cited by 6 publications (2 citation statements) | References 50 publications
“…The value of "n" can be chosen manually, as is the case in HayCAM. For further information, please refer to the study of Örnek and Ceylan [21].…”
Section: Calculating the Number of Filters Dynamically (mentioning, confidence: 99%)
“…In our previous work, we proposed HayCAM [21] as a visual XAI method and compared it to other well-known methods such as GradCAM [22], EigenCAM [23], and GradCAM++ [24]. Our primary contribution was to reduce the last layer of the deep model during visualization to ignore irrelevant filters and obtain a more focused activation map.…”
Section: Introduction (mentioning, confidence: 99%)
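The statements above describe the core HayCAM idea: drop irrelevant filters in the last convolutional layer so the resulting map is more focused, with the number of retained filters "n" chosen manually in the original method. The sketch below illustrates that idea under an assumed relevance criterion (magnitude of the gradient-derived channel weight); it is not the authors' exact reduction step.

```python
import torch
import torch.nn.functional as F

def focused_map(feature_maps, channel_weights, n=16):
    """Build an activation map from only the n most relevant channels.

    Illustrative only: "relevance" is taken here as the magnitude of the
    gradient-derived channel weight; HayCAM's actual reduction step may differ.
    feature_maps: (K, h, w) tensor, channel_weights: (K,) tensor, n chosen manually.
    """
    top = channel_weights.abs().topk(n).indices                # indices of the n strongest channels
    cam = F.relu((channel_weights[top, None, None] * feature_maps[top]).sum(0))
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
    return cam
```

With n equal to the full channel count this reduces to the ordinary weighted sum used by Grad-CAM-style maps; smaller n concentrates the map on the strongest channels, which is the "more focused activation map" effect the citing work refers to.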