2022
DOI: 10.48550/arxiv.2202.07603
Preprint

Fairness Indicators for Systematic Assessments of Visual Feature Extractors

Cited by 1 publication (10 citation statements)
References: 35 publications
“…We full-finetune all models on the same subset of the ImageNet-22K dataset. Then, for each gender and skin tone, we run inference with the transferred models on the Casual Conversations Dataset and measure the percentage of images associated with different labels at a confidence threshold of 0.1, following [51]. We observe that our model makes the fewest Harmful predictions and the most Human predictions on images of people.…”
Section: Indicator 1: Same Attribute Retrieval (mentioning, confidence: 94%)
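The quoted evaluation reduces to a per-group tally: for each (gender, skin tone) group, count what fraction of images receives each label above the 0.1 confidence threshold. Below is a minimal Python sketch of that measurement; the `predict_labels` scorer and the record layout with `gender`/`skintone` fields are hypothetical, since the quote specifies only the grouping and the threshold.

```python
# Minimal sketch of the per-group label-rate measurement described above.
# `records` and `predict_labels` are assumed interfaces, not the authors' code.
from collections import Counter, defaultdict

CONF_THRESHOLD = 0.1  # threshold used in the quoted evaluation

def label_rates(records, predict_labels):
    """records: iterable of dicts with keys 'image', 'gender', 'skintone'.
    predict_labels(image) -> dict mapping label -> confidence score.
    Returns, for each (gender, skintone) group, the fraction of images
    on which each label fires above the confidence threshold."""
    counts = defaultdict(Counter)   # group -> label -> #images with that label
    totals = Counter()              # group -> #images seen
    for rec in records:
        group = (rec["gender"], rec["skintone"])
        totals[group] += 1
        for label, conf in predict_labels(rec["image"]).items():
            if conf >= CONF_THRESHOLD:
                counts[group][label] += 1
    return {
        group: {label: n / totals[group] for label, n in labels.items()}
        for group, labels in counts.items()
    }
```

Comparing, say, the rate of Human versus Harmful labels across the resulting groups yields the per-group disparities the quote reports.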
“…Recently, Yang et al. [136] made an effort to reduce these biases by removing 2,702 synsets (out of 2,800 total) from the person subtree used in ImageNet. Motivated by the importance of building socially responsible models, we follow recent work [51] in systematically studying the fairness, harms, and biases of our models trained with self-supervised learning on a random group of internet images.…”
Section: Large-Scale Benchmarking of Computer Vision Models (mentioning, confidence: 99%)