2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00220

Visualizing Deep Similarity Networks

Abstract: For convolutional neural network models that optimize an image embedding, we propose a method to highlight the regions of images that contribute most to pairwise similarity. This work is a corollary to the visualization tools developed for classification networks, but is applicable to the problem domains better suited to similarity learning. The visualization shows how similarity networks that are fine-tuned learn to focus on different features. We also generalize our approach to embedding networks that use differ…
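The decomposition the abstract describes can be sketched in a few lines: when an embedding is a global average pool of the final convolutional activation, the cosine similarity between two images splits into per-location contributions on either image. The sketch below is a minimal NumPy illustration under that assumption; the function name and shapes are hypothetical, not the paper's reference implementation.

```python
import numpy as np

def similarity_heatmap(feat_a, feat_b):
    """Per-location contribution of image A to its similarity with image B.

    feat_a, feat_b: (H, W, C) final conv activations for the two images,
    assuming the embedding is the global average pool of the activation.
    Returns an (H, W) map over image A that sums to the cosine similarity.
    """
    c = feat_a.shape[-1]
    pooled_a = feat_a.reshape(-1, c).mean(axis=0)   # embedding of A (pre-norm)
    pooled_b = feat_b.reshape(-1, c).mean(axis=0)   # embedding of B (pre-norm)
    unit_b = pooled_b / np.linalg.norm(pooled_b)
    # Dot each spatial location of A against B's unit embedding; dividing by
    # (H * W * ||pooled_a||) makes the map sum to the overall cosine score.
    scale = feat_a.shape[0] * feat_a.shape[1] * np.linalg.norm(pooled_a)
    return feat_a @ unit_b / scale
```

Overlaying this map on image A (and the symmetric map on image B) gives the pairwise heatmaps referenced by the citing works below.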


Cited by 49 publications (28 citation statements). References 21 publications.
“…However, we first needed to ascertain whether the difference images contained any useful information at all. Therefore, we used a similarity visualization technique [26] to visualize whether the difference image retained any useful information. Figure 7 depicts the visualization in terms of heatmaps (saliency maps), which show that the difference images contain discriminative information which can be harnessed for decoding the second subject.…”
Section: Results From Experiments
Confidence: 99%
“…Grad-CAM [43] and CNN Fixation [28] are two popular methods to visualize CNNs; however, they are not designed for pairwise networks such as our ComplexIris-Net. The works in [45], [57] have proposed to decompose the last convolution activation to highlight the image regions that contribute the most to the overall matching score. We employ these frameworks to visualize and compare the last convolution activation of both complex-valued and real-valued iris networks.…”
Section: Visualization
Confidence: 99%
“…Each diagram shows a different XAI use case (clockwise, starting from the lower left): XAI as preprocessing (e.g., explaining the data), XAI as part of the system (e.g., inherently interpretable models), XAI as post-hoc explanation (e.g., visual saliency maps), and XAI as a combination of all the previous methods. There has been an increasing push to create explanations for other image understanding tasks, including object detection [17] and image similarity [18]–[22]. A notional architecture diagram for an analytics Domain Implementation is shown in Figure 6, here supporting saliency maps as an example. On the client side (right), the framework provides hooks for configuration, supplying lists of images, receiving lists of saliency maps, etc.…”
Section: Analytics Domain Implementation
Confidence: 99%