2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.01251
Counterfactual VQA: A Cause-Effect Look at Language Bias

Cited by 294 publications (143 citation statements)
References 33 publications
“…VQA-CP [3], drawn from the VQA v2 dataset [20], is the first benchmark proposed to evaluate (and reduce) question-oriented language bias in VQA models. Considerable effort [3,29,33,48,53,1] has been invested in VQA-CP along three dimensions: (i) compensating for question-answer distribution patterns through a regularizer based on an auxiliary model [48,8,14,67,21,29]; (ii) taking advantage of additional supervision from human-generated attention maps [53,72,17]; and (iii) synthesizing counterfactual examples to augment the training set [1,10,66]. Recent work [68] shows that simple methods such as generating answers at random can already surpass the state of the art on some question types.…”
Section: Robust VQA Benchmarks
confidence: 99%
“…Causal Recommendation. Causal inference has been widely used in many machine learning applications, spanning computer vision [23,34], natural language processing [11,12,43], and information retrieval [4]. In recommendation, most work on causal inference [25] focuses on mitigating various biases in user feedback, including position bias [18], the clickbait issue [37], and popularity bias [45].…”
Section: Related Work
confidence: 99%
“…Counterfactual inference. A line of research attempts to endow deep neural networks with counterfactual thinking by incorporating counterfactual inference (Yue et al., 2021; Wang et al., 2021; Niu et al., 2021; Tang et al., 2020; Feng et al., 2021). These methods perform counterfactual inference over the model predictions according to a pre-defined causal graph.…”
Section: Related Work
confidence: 99%
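The counterfactual inference described above can be illustrated with the cause-effect formulation of the cited paper (Niu et al., 2021): the debiased prediction is the total indirect effect, obtained by subtracting the natural direct effect of the question-only (language) branch from the total effect of the full model. The function and variable names below, the toy logits, and the plain subtraction fusion are illustrative assumptions for a minimal sketch, not the paper's exact implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def debiased_scores(fused_logits, question_only_logits):
    # TE: prediction with both image and question present.
    total_effect = fused_logits
    # NDE: what the language-only branch would predict on its own,
    # i.e. the direct shortcut effect of the question.
    natural_direct_effect = question_only_logits
    # TIE = TE - NDE is used for debiased inference.
    return total_effect - natural_direct_effect

# Toy answer logits: the language prior pushes the full model toward answer 0.
fused = np.array([2.0, 1.0, 0.5])    # full (vision + language) model
q_only = np.array([1.5, 0.2, 0.1])   # question-only branch
tie = debiased_scores(fused, q_only)
probs = softmax(tie)
```

In this toy example the biased model prefers answer 0, but after removing the language branch's direct effect the debiased scores favor answer 1, which is the behavior these counterfactual-inference methods aim for.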
“…Debiased training (Tu et al., 2020; Utama et al., 2020) eliminates spurious correlations or biases in the training data to enhance generalization and handle out-of-distribution samples. Beyond the training phase, a few inference techniques can also improve model performance on hard samples, including posterior regularization (Srivastava et al., 2018) and causal inference (Yu et al., 2020; Niu et al., 2021). However, both techniques require domain knowledge, such as a prior or a causal graph tailored to the specific application.…”
Section: Related Work
confidence: 99%