2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00737
Causal Transportability for Visual Recognition


Cited by 21 publications (7 citation statements)
References 20 publications
“…image backdrop or contextual scene) that an image is placed in. This is a pernicious cause of disparity in algorithm performance and is widely investigated as an out-of-distribution generalisation problem in computer vision [24]. For the specific computer vision problem of facial affect recognition, the biggest dichotomy is between that of a "lab-controlled" and an "in-the-wild" dataset.…”
Section: Contextual Bias
confidence: 99%
“…The spurious label of each training example (e.g., whether this example contains the spurious feature) is either provided (Sagawa et al, 2019;Izmailov et al, 2022) or inferred by training a reference model (Nam et al, 2020;Creager et al, 2021;Liu et al, 2021;Nam et al, 2022) until it learns the spurious correlations. Other approaches indirectly estimate and use the causal effect of hidden non-labeled spurious attributes in pre-training (Mao et al, 2022).…”
Section: Related Work
confidence: 99%
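The reference-model approach quoted above (e.g. JTT, Liu et al., 2021) can be summarised in a few lines: briefly train an ERM reference model, then treat the examples it misclassifies as likely to conflict with the spurious correlation. Below is a minimal sketch under that assumption; the linear model and random data are toy placeholders, not the cited authors' code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def infer_spurious_flags(reference_model, loader):
    """Flag examples the reference model misclassifies; under the JTT-style
    assumption these are the ones that conflict with the spurious feature."""
    reference_model.eval()
    flags = []
    with torch.no_grad():
        for x, y in loader:
            preds = reference_model(x).argmax(dim=1)
            flags.append(preds != y)  # True = likely spurious-conflicting
    return torch.cat(flags)

# Toy usage: an untrained linear "reference model" on random data.
model = torch.nn.Linear(10, 2)
data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
flags = infer_spurious_flags(model, DataLoader(data, batch_size=16))
# Second stage (not shown): retrain, upweighting the flagged examples.
```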
“…In our experiments, we used GradCAM (Selvaraju et al, 2017) for the explanation maps. While GradCAM explanations may not be perfectly aligned with the model's attention, their usage has shown practical benefits for model debugging (Yosinski et al, 2016;Simonyan et al, 2014;Mao et al, 2022).…”
Section: Metrics
confidence: 99%
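For readers unfamiliar with GradCAM (Selvaraju et al., 2017), the explanation map referenced above weights a convolutional layer's activations by the global-average-pooled gradients of the target class score. A minimal PyTorch sketch follows; the torchvision ResNet, the untrained weights, and the choice of layer4 are illustrative assumptions, not the cited papers' setup.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Untrained weights keep the demo self-contained; in practice a trained
# model would be loaded here.
model = models.resnet18(weights=None).eval()
target_layer = model.layer4  # last conv block: a typical Grad-CAM target

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Return an [H, W] heatmap for input x of shape [1, 3, H, W]."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    acts = activations["value"]   # [1, C, h, w]
    grads = gradients["value"]    # [1, C, h, w]
    # Weight each channel by its global-average-pooled gradient.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```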
“…(Zhang et al 2022a) assigned high confidence to image regions consistent with the global semantics in aggregating. (Pan, Wu, and Zhang 2023) proposed to eliminate redundant or irrelevant fragment alignments from the perspective of information coding. In general, not all fragments contribute to image-text relevance, and a large branch of existing methods is devoted to mining the vital ones to measure the relevance accurately.…”
Section: Introduction
confidence: 99%
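The fragment-mining idea in this excerpt, scoring image-text relevance from the best-aligned region-word pairs rather than from all fragments equally, reduces in its simplest form to a max-over-regions aggregation of cosine similarities, in the spirit of stacked cross-attention methods. A toy sketch with illustrative feature shapes, not the cited authors' models:

```python
import torch
import torch.nn.functional as F

def fragment_relevance(regions, words):
    """regions: [R, D] image-region features; words: [W, D] word features."""
    regions = F.normalize(regions, dim=-1)
    words = F.normalize(words, dim=-1)
    sim = words @ regions.T         # [W, R] cosine similarities
    best = sim.max(dim=1).values    # best-matching region per word
    return best.mean()              # aggregate only the vital alignments

# Toy usage: 36 region features and 12 word features of dimension 256.
score = fragment_relevance(torch.randn(36, 256), torch.randn(12, 256))
```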