2019
DOI: 10.1109/tfuzz.2019.2946520

Interpretable Deep Convolutional Fuzzy Classifier

Cited by 35 publications (14 citation statements). References 49 publications.
“…In addition, the authors of (Deng et al. 2017) presented an FDNN for classification tasks such as natural-scene image categorization and stock-trend prediction. Similar studies on image classification can be found in (Guan et al. 2020; Kunchala et al. 2020; Liu et al. 2020a, b; Manchanda et al. 2020; Tianyu and Xu 2020; Yeganejou and Dick 2018, 2019; Yeganejou et al. 2020; Zhang et al. 2020a, b, c).…”
Section: Analysis and Synthesis of Data (supporting)
Confidence: 69%
“…A simple example of a cooperative DNFS was proposed by Yeganejou et al. (2020), using a CNN for feature extraction and transferring the outputs of the final convolutional layer to a fuzzy classifier, as depicted in Fig. 12.…”
Section: Analysis and Synthesis of Data (mentioning)
Confidence: 99%
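The cooperative pipeline quoted above (CNN features handed to a fuzzy classifier) can be sketched minimally. This is not the cited authors' implementation: the prototype matrix, feature vector, and the standard fuzzy-c-means membership formula are illustrative assumptions standing in for the CNN's final-layer outputs and the learned fuzzy rule base.

```python
import numpy as np

def fuzzy_memberships(x, prototypes, m=2.0):
    """FCM-style membership of feature vector x in each class prototype
    (standard fuzzy-c-means membership formula with fuzzifier m)."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12  # distance to each prototype
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()                              # memberships sum to 1

# Toy stand-in for CNN features: two class prototypes in a 3-D feature space.
prototypes = np.array([[0.0, 0.0, 0.0],
                       [1.0, 1.0, 1.0]])
x = np.array([0.9, 1.1, 1.0])           # feature vector from the "CNN"
u = fuzzy_memberships(x, prototypes)
print(u.argmax())                       # class with the highest membership
```

Classifying by the highest membership keeps the decision inspectable: each prediction comes with a degree of membership in every class rather than a single opaque score.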
“…Indeed, in many situations we do not need to trade accuracy for interpretability, as we can construct interpretable ML models that match or surpass the accuracy of black-box models [8]. Fuzzy XAI has recently gained popularity in various applications [10]. Especially in safety-critical applications, interpretable architectures are preferred over non-interpretable structures [5].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Some models visualized feature maps and filters in different layers, as in NIN [48], SAD/FAD [53], and CAR [66]. Other models visualized class activation maps to evaluate class discrimination, such as Dynamic-K [52], XCNN [59], ProtoPNet [56], FBI [54], FCM [61], Loss Attention [45], SENN [63], Subnetwork Extraction [65], CAM [77], Grad-CAM [18], Grad-CAM++ [78], Smooth Grad-CAM++ [79], Augmented Grad-CAM [81], Score-CAM [22], IBD [84], Hpnet [87], and U-CAM [82]. Additionally, saliency maps were visualized to reconstruct the input image based on the influence of pixels/features on the CNN decision, as in Integrated Gradients [72], FER-CNN [74], LRP [71], DeepLIFT [73], Saliency Maps [75], and Deconv [76],…”
Section: Visualization (mentioning)
Confidence: 99%
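The class-activation-map family surveyed above shares one core step: weight the final convolutional feature maps by the classifier weights of the target class and sum over channels. A minimal CAM-style sketch follows; the feature maps and weights are random toy data, not outputs of any of the cited models.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM-style localization: weight each final-layer feature map by the
    target class's classifier weight and sum channel-wise to a heatmap."""
    # feature_maps: (C, H, W); class_weights: (C,) for one class
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)                               # keep positive evidence
    peak = cam.max()
    return cam / peak if peak > 0 else cam                   # normalize to [0, 1]

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))       # toy final-conv activations, 8 channels
w = rng.random(8)                   # toy classifier weights for one class
heatmap = class_activation_map(fmaps, w)
print(heatmap.shape)                # (7, 7)
```

Upsampling the resulting low-resolution heatmap to the input size is what yields the familiar overlay images; Grad-CAM generalizes this by deriving the channel weights from gradients instead of the final linear layer.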