2020
DOI: 10.48550/arxiv.2004.13939
Preprint

Evaluating Transformer-Based Multilingual Text Classification


Cited by 1 publication (2 citation statements)
References 0 publications
“…Various studies have assessed the cross-linguality of pretrained language models. Recent efforts have approached this question via performance on an array of downstream NLP tasks (Conneau et al., 2020; Dredze, 2019, 2020; Karthikeyan et al., 2020; Pires et al., 2019; Groenwold et al., 2020), and others have proposed methods for better cross-lingual alignment in light of systematic cross-lingual deficiencies (Xia et al., 2021). Our study hews closest methodologically to and Dubossarsky et al. (2020), who investigate the determinants of cross-lingual isomorphism using monolingual fastText embeddings (Mikolov et al., 2013).…”
Section: Related Work (mentioning)
Confidence: 85%
“…Recent work has looked at the typological and training-related factors affecting cross-lingual alignment in monolingual embedding space (Dubossarsky et al., 2020), assessed the cross-linguality of pretrained language models using probing tasks and downstream performance measures (Conneau et al., 2020; Dredze, 2019, 2020; Pires et al., 2019; Groenwold et al., 2020), and probed Transformer models (Wolf et al., 2020) for linguistic structure (see Rogers et al. 2020 for an overview of over 150 studies). However, a gap in the research remains regarding the following question: What are the linguistic, quasi-linguistic, and training-related factors determining the cross-linguality of sentence representations in shared embedding space, and what are the relative weights of these factors?…”
Section: Introduction (mentioning)
Confidence: 99%