Findings of the Association for Computational Linguistics: ACL 2022 (2022)
DOI: 10.18653/v1/2022.findings-acl.211

Assessing Multilingual Fairness in Pre-trained Multimodal Representations

Abstract: Recently, pre-trained multimodal models such as CLIP have shown exceptional capabilities at connecting images and natural language. Their textual representations in English can be desirably transferred to multiple languages and support downstream multimodal tasks for those languages. Nevertheless, the principle of multilingual fairness is rarely scrutinized: do multilingual multimodal models treat languages equally? Are their performances biased towards particular languages? To answer these questions, we…
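The fairness question posed in the abstract can be made concrete with a few lines of code. Below is a minimal sketch (not the paper's released code) that probes whether a multilingual CLIP variant assigns translated captions of the same image comparable similarity scores; the checkpoint names are the public sentence-transformers ones, and the image path and captions are hypothetical.

```python
# A minimal sketch (not the paper's released code): probe whether a
# multilingual CLIP variant gives translated captions of the same image
# equal similarity scores. Checkpoints are the public sentence-transformers
# ones; the image path and captions are hypothetical.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

img_model = SentenceTransformer("clip-ViT-B-32")                  # image encoder
txt_model = SentenceTransformer("clip-ViT-B-32-multilingual-v1")  # aligned multilingual text encoder

image_emb = img_model.encode(Image.open("cat.jpg"))  # hypothetical local image

captions = {
    "en": "a photo of a cat",
    "de": "ein Foto einer Katze",
    "zh": "一张猫的照片",
}
text_embs = txt_model.encode(list(captions.values()))

# If the model treated languages equally, these scores would be close;
# systematic gaps across languages are the kind of unfairness the paper studies.
for lang, score in zip(captions, util.cos_sim(image_emb, text_embs)[0]):
    print(f"{lang}: {float(score):.3f}")
```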

Cited by 8 publications (4 citation statements) · References 29 publications
Citation types: 0 supporting, 4 mentioning, 0 contrasting
“…Zhao et al. (2020) found that when a model trained on one language is deployed to another, it also carries bias from the source to the target language. Wang et al. (2022) considered languages as fairness objects and found that multimodal models are biased against different languages. Cho et al. (2021) examined gender bias in translation systems for German, Korean, Portuguese, and Tagalog and found that scaling up language resources may amplify the bias cross-linguistically.…”
Section: Research of Multilingual Text Debiasing Methods
confidence: 99%
“…2) Approach#2: We adopted a methodology similar to Approach#1, modified to use Arabic pre-trained encoders ([24], [41]) and multilingual pre-trained language models [42] to extract the vector representations of each paragraph p_j and question q. Notably, this approach obtains these vectors without fine-tuning the model on the Arabic-NarrativeQA dataset.…”
Section: Arabic-NarrativeQA System
confidence: 99%
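For illustration, here is a hedged sketch of the zero-shot retrieval setup this snippet describes: paragraphs p_j and the question q are embedded with a frozen multilingual encoder (no fine-tuning on Arabic-NarrativeQA) and ranked by cosine similarity. The checkpoint name and the sample Arabic strings are assumptions, not the authors' exact setup.

```python
# A hedged sketch of the zero-shot retrieval that Approach#2 describes:
# embed each paragraph p_j and the question q with a frozen multilingual
# encoder (no fine-tuning on Arabic-NarrativeQA) and rank paragraphs by
# cosine similarity. The checkpoint and sample strings are assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")  # assumed checkpoint

question = "أين ولد الكاتب؟"  # "Where was the author born?"
paragraphs = [
    "ولد الكاتب في القاهرة عام 1911.",  # "The author was born in Cairo in 1911."
    "تدور أحداث الرواية في حي شعبي.",   # "The novel takes place in a working-class neighborhood."
]

q_emb = encoder.encode(question, convert_to_tensor=True)
p_embs = encoder.encode(paragraphs, convert_to_tensor=True)

# Rank paragraphs by similarity to the question, highest first.
scores = util.cos_sim(q_emb, p_embs)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{float(scores[idx]):.3f}  {paragraphs[idx]}")
```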
“…• We study automatic evaluation based on a “general” vision-language model, CLIP, which is flexible and powerful. Consequently, this also limits the upper-bound performance of our metrics, as CLIP is not trained for classifying attributes and might contain bias itself (Agarwal et al., 2021; Wang et al., 2021; Yamada et al., 2022). For a more accurate evaluation, leveraging task-specific models (Chia et al., 2022) built on fashion research in computer vision (Liu et al., 2016; Cheng et al., 2021) could be helpful, which we leave for future exploration.…”
Section: Limitations
confidence: 99%
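To make this limitation concrete, the sketch below shows the generic mechanics of CLIP-based attribute evaluation: a generated image is matched against attribute prompts by zero-shot similarity, so the metric inherits whatever attribute-recognition gaps or biases CLIP has. This is an illustration, not the cited work's code; the prompts and image path are invented.

```python
# A generic illustration (not the cited work's code) of CLIP-based
# attribute evaluation: a generated image is matched against attribute
# prompts by zero-shot similarity, so the metric inherits CLIP's gaps
# and biases in attribute recognition. Prompts and path are invented.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

clip = SentenceTransformer("clip-ViT-B-32")  # public CLIP checkpoint
image_emb = clip.encode(Image.open("generated_dress.jpg"))  # hypothetical model output

attribute_prompts = [
    "a photo of a red dress",
    "a photo of a blue dress",
    "a photo of a striped dress",
]
text_embs = clip.encode(attribute_prompts)

# The top-scoring prompt becomes the predicted attribute; the evaluation
# is only as reliable as CLIP's zero-shot attribute recognition.
scores = util.cos_sim(image_emb, text_embs)[0]
best = int(scores.argmax())
print(attribute_prompts[best], f"{float(scores[best]):.3f}")
```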