2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00328

What do Deep Networks Like to See?

Abstract: We propose a novel way to measure and understand convolutional neural networks by quantifying the amount of input signal they let in. To do this, an autoencoder (AE) was fine-tuned on gradients from a pre-trained classifier with fixed parameters. We compared the reconstructed samples from AEs that were fine-tuned on a set of image classifiers (AlexNet, VGG16, ResNet-50, and Inception v3) and found substantial differences. The AE learns which aspects of the input space to preserve and which ones to ignore, base…
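Read concretely, the setup above amounts to prepending an autoencoder to a frozen, pre-trained classifier and letting only the autoencoder receive gradient updates from the classification loss. The sketch below is an illustrative reconstruction of that training loop, not the authors' released code; the toy autoencoder architecture, the choice of torchvision's AlexNet, and all hyperparameters are assumptions.

```python
# Illustrative sketch (assumed PyTorch, not the authors' code): an autoencoder (AE)
# is prepended to a pre-trained classifier whose parameters stay fixed, so the AE
# is fine-tuned purely by gradients of the classification loss flowing back
# through the frozen classifier.
import torch
import torch.nn as nn
from torchvision.models import alexnet

class ConvAE(nn.Module):
    """Toy convolutional autoencoder standing in for the AE used in the paper."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

classifier = alexnet(weights="IMAGENET1K_V1")
classifier.eval()
for p in classifier.parameters():          # classifier parameters stay fixed
    p.requires_grad_(False)

ae = ConvAE()
optimizer = torch.optim.Adam(ae.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step: only the AE is updated."""
    optimizer.zero_grad()
    logits = classifier(ae(images))        # reconstructed input fed to the frozen classifier
    loss = criterion(logits, labels)
    loss.backward()                        # gradients reach the AE through the fixed classifier
    optimizer.step()
    return loss.item()
```

After such fine-tuning, inspecting the AE's reconstructions indicates which aspects of the input the classifier effectively lets in, which is the quantity the abstract describes.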

Cited by 31 publications (30 citation statements); references 27 publications.

Citation statements:
“…Finally, we compute the attributions from the trained auto-encoder (step 3), followed by the sanity check using our suppression test (step 4). We will first cover some basic background and then dive into the formulation of the problem presented by Palacio et al. [11]. We will then present the proposed formulation, adapting the basic one for the interpretability of deep learning-based time series models.…”
Section: Methods
Citation type: mentioning
confidence: 99%
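The excerpt does not spell out the suppression test, so the following is only a rough guess at what such a sanity check could look like: keep the inputs with the highest attribution magnitudes, zero out the rest, and verify that the model's prediction survives. The function name, `keep_fraction`, and the element-wise `attributions` tensor are assumptions, not details from the cited work.

```python
# Rough guess at a suppression-style sanity check (assumption, not the cited
# authors' implementation): keep only the highest-|attribution| inputs, zero out
# the rest, and check that the prediction is unchanged. `attributions` is assumed
# to have the same shape as the input `x`.
import torch

@torch.no_grad()
def suppression_test(model, x, attributions, keep_fraction=0.1):
    flat = attributions.abs().flatten()
    k = max(1, int(keep_fraction * flat.numel()))
    threshold = flat.topk(k).values.min()          # smallest attribution still kept
    mask = (attributions.abs() >= threshold).to(x.dtype)
    original_pred = model(x).argmax(dim=-1)
    suppressed_pred = model(x * mask).argmax(dim=-1)
    return bool((original_pred == suppressed_pred).all())
```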
“…The first stream for explainable systems, which attempts to explain pretrained models using attribution techniques, has been a major focus of research in the past few years. The most common strategy is to visualize the filters of the deep model [11, 13-16]. This is very effective for visual modalities since images are directly intelligible to humans.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
“…al. [44], where pre-training an Encoder-Decoder system at the input layer produced more robust results that were less prone to adversarial attacks in image classification tasks.…”
Section: Ablation Studies
Citation type: mentioning
confidence: 99%
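One way the robustness claim in this excerpt could be probed is a single-step FGSM comparison between the bare classifier and the autoencoder-prefixed one. The sketch below is illustrative only and is not the cited work's evaluation protocol; `ae` and `classifier` refer to the hypothetical modules from the first sketch, and `eps` is an arbitrary perturbation budget.

```python
# Illustrative single-step FGSM check (assumption, not the cited work's protocol):
# compare accuracy under perturbation for the bare classifier and for the
# AE-prefixed classifier.
import torch
import torch.nn as nn

def fgsm_accuracy(model, images, labels, eps=0.03):
    images = images.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    adv = (images + eps * images.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        preds = model(adv).argmax(dim=-1)
    return (preds == labels).float().mean().item()

# Hypothetical usage, reusing `ae` and `classifier` from the earlier sketch:
# acc_plain = fgsm_accuracy(classifier, images, labels)
# acc_ae    = fgsm_accuracy(nn.Sequential(ae, classifier), images, labels)
```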