Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2023.emnlp-main.491
Don’t Trust ChatGPT when your Question is not in English: A Study of Multilingual Abilities and Types of LLMs

Xiang Zhang,
Senyu Li,
Bradley Hauer
et al.

Abstract: Large language models (LLMs) have demonstrated exceptional natural language understanding abilities, and have excelled in a variety of natural language processing (NLP) tasks. Despite the fact that most LLMs are trained predominantly on English, multiple studies have demonstrated their capabilities in a variety of languages. However, fundamental questions persist regarding how LLMs acquire their multilingual abilities and how performance varies across different languages. These inquiries are crucial for the st…

Cited by 10 publications (1 citation statement). References 18 publications.
“…However, these datasets are often disproportionately dominated by English content (Brown et al., 2020; Chowdhery et al., 2022; Workshop et al., 2023), resulting in an English-centric bias in LLMs. This imbalance can subsequently hinder the models' proficiency in other languages, often leading to suboptimal performance in non-English contexts (Ahuja et al., 2023; Lai et al., 2023; Zhang et al., 2023b).…”
Section: Introduction
confidence: 99%