2019
DOI: 10.48550/arxiv.1911.11834
Preprint
Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation

Abstract: Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly effective visual recog…

Cited by 1 publication (2 citation statements)
References 36 publications
“…A final approach compares the distribution of tags applied to images that depict different social groups in an image tagging system's training dataset to the distribution of tags applied by the system to images that depict these social groups (Wang et al. 2019, 2020; Zhao et al. 2017; Kay, Matuszek, and Munson 2015a). This approach is premised on the belief that the outputs of image tagging systems should not exacerbate any differences between social groups that are present in their training datasets.…”
Section: Computational Measurement Approaches
Citation type: mentioning
Confidence: 99%
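The distribution-comparison approach in the statement above can be sketched in a few lines: compute how often each tag co-occurs with each social group in the training labels, do the same for the system's outputs, and report the difference (positive values indicate amplification). This is a minimal illustration with hypothetical data and function names, not an implementation from any of the cited papers.

```python
from collections import Counter


def tag_distribution(records):
    """Fraction of images in each social group that carry each tag.

    records: list of (group, tags) pairs, where tags is an iterable of strings.
    Returns a dict mapping (group, tag) -> rate in [0, 1].
    """
    group_counts = Counter(group for group, _ in records)
    pair_counts = Counter((group, tag)
                          for group, tags in records
                          for tag in set(tags))
    return {(g, t): c / group_counts[g] for (g, t), c in pair_counts.items()}


def amplification(train_records, predicted_records):
    """Change in each (group, tag) rate from training data to system outputs."""
    train = tag_distribution(train_records)
    pred = tag_distribution(predicted_records)
    return {key: pred.get(key, 0.0) - train.get(key, 0.0)
            for key in set(train) | set(pred)}


# Toy example (hypothetical): a "cooking" tag over two groups A and B.
train = [("A", ["cooking"]), ("A", []), ("B", ["cooking"]), ("B", [])]
pred = [("A", ["cooking"]), ("A", ["cooking"]), ("B", []), ("B", ["cooking"])]
print(amplification(train, pred))
# The system tags group A with "cooking" more often than the training data did
# (0.5 -> 1.0), while group B's rate is unchanged: the difference is amplified.
```

A system that merely reproduced the training-set rates would yield zero everywhere; under the belief stated above, large positive entries flag tags whose group skew the system exacerbates.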
“…Another performance-based approach tests whether image tagging systems exhibit comparable performance for images that depict different social groups when applying any tags. For example, researchers have investigated whether image tagging systems perform better when tagging everyday objects in images that depict some social groups than they do for images that depict others, finding performance disparities between genders (Bhargava and Forsyth 2019; Wang et al. 2020) and ages (Wang et al. 2020).

Perturbation-based approaches involve varying aspects of images provided as inputs to image tagging systems to see whether different tags are applied, either correctly or incorrectly. These approaches are premised on two beliefs: first, that the behaviors of image tagging systems should not reflect spurious correlations in their training datasets; and second, that particular aspects of images should not be the basis for particular differences in tagging behaviors (e.g., the appearance of a person's face should not affect the occupation with which they are tagged).…”
Section: Computational Measurement Approaches
Citation type: mentioning
Confidence: 99%
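The performance-based approach in the statement above reduces to computing accuracy separately per social group and comparing the results. The sketch below uses hypothetical sample data; the disparity metric (max minus min group accuracy) is one simple choice among several, not the specific measure used in the cited papers.

```python
def per_group_accuracy(samples):
    """Accuracy of a tagging system broken down by social group.

    samples: list of (group, correct) pairs, where correct is a bool
    indicating whether the system tagged that image correctly.
    """
    totals, hits = {}, {}
    for group, correct in samples:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {group: hits[group] / totals[group] for group in totals}


# Toy evaluation results (hypothetical): group A 2/3 correct, group B 1/2.
samples = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False)]
acc = per_group_accuracy(samples)
# Gap between the best- and worst-served groups; zero means parity.
disparity = max(acc.values()) - min(acc.values())
print(acc, disparity)
```

A nonzero disparity on everyday-object tagging, as the citing authors note was found across genders and ages, is the signal this style of measurement looks for.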