2019
DOI: 10.48550/arxiv.1902.09492
Preprint

Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing

Cited by 9 publications (12 citation statements)
References 0 publications
“…Cross-lingual embedding alignment find that independently trained monolingual word embedding spaces in ELMo are isometric under rotation. Similarly, Schuster et al. (2019) and Wang et al. (2019) geometrically align contextualized word embeddings trained independently. find that cross-lingual transfer in mBERT is possible even without shared vocabulary tokens, which they attribute to this isometricity.…”
Section: Related Work
confidence: 99%
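The "isometric under rotation" claim in this excerpt has a well-known closed-form counterpart: aligning two embedding spaces with an orthogonal matrix is the orthogonal Procrustes problem. The following is a minimal sketch of that idea, not the cited papers' pipelines; the anchor matrices and the hidden rotation are synthetic placeholders standing in for paired (source, target) word vectors.

```python
# Sketch: align two embedding spaces with an orthogonal rotation
# (Procrustes). Rows of X and Y are paired anchor embeddings; here they
# are synthetic, with Y a hidden rotation of X, to mimic isometry.
import numpy as np

def procrustes_rotation(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return the orthogonal W minimizing ||X @ W - Y||_F (closed form via SVD)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
Y = rng.normal(size=(1000, 300))                       # "target" anchors
R_true = np.linalg.qr(rng.normal(size=(300, 300)))[0]  # hidden rotation
X = Y @ R_true.T                                       # "source" anchors

W = procrustes_rotation(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))  # True: the spaces are isometric
```

If the two spaces really are isometric, the recovered W maps source vectors onto their target counterparts exactly; with real embeddings the residual is nonzero but alignment quality remains high, which is what makes the zero-shot transfer described above possible.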
“…With the enhancement of the TF-IDF algorithm, the bag-of-words addition model further improves performance, and the term-by-term query translation model yields the state-of-the-art performance for unsupervised cross-lingual retrieval (Litschko et al. 2018). Another group of methods transfers the cross-lingual retrieval task to a monolingual retrieval task by using machine translation systems (Schuster et al. 2019), e.g., combining a cross-language tree kernel with a neural machine translation system for retrieval (Da San Martino et al. 2017), or learning language-invariant representations for cross-lingual question re-ranking.…”
Section: Related Work, Information Retrieval Models
confidence: 99%
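To make the "term-by-term query translation" idea in this excerpt concrete, here is a minimal sketch: translate each query term through a bilingual dictionary, then score documents with ordinary monolingual TF-IDF. The toy dictionary, documents, and query are invented for illustration and are not from the cited systems.

```python
# Sketch: term-by-term query translation followed by monolingual
# TF-IDF retrieval. Dictionary and corpus are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

dictionary = {"hund": "dog", "katze": "cat", "beißt": "bites"}  # toy de->en
docs = ["the dog bites the mailman", "a cat sleeps on the mat"]

def translate_query(query: str) -> str:
    # Look up each term independently; unknown terms pass through unchanged.
    return " ".join(dictionary.get(t, t) for t in query.lower().split())

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)
query_vec = vectorizer.transform([translate_query("Hund beißt")])
scores = linear_kernel(query_vec, doc_vecs).ravel()  # cosine on tf-idf
print(scores.argmax())  # 0: the dog-bite document ranks first
```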
“…In cross-lingual mapping learning, a linear mapping between the source embedding space and the target embedding space is learned in an adversarial fashion (Conneau et al. 2018a). To enhance the quality of the learned bilingual word embeddings, various refinement strategies have been proposed, such as synthetic parallel vocabulary building (Artetxe, Labaka, and Agirre 2017), orthogonal constraints (Smith et al. 2017), cross-domain similarity local scaling (Søgaard, Ruder, and Vulić 2018), self-boosting (Artetxe, Labaka, and Agirre 2018), and byte-pair encodings (Sennrich, Haddow, and Birch 2016; Lample et al. 2018). Alternatively, a context-dependent cross-lingual representation mapping based on pre-trained ELMo (Peters et al. 2018) was recently proposed to boost the performance of cross-lingual learning.…”
Section: Low-resource Cross-lingual Learning
confidence: 99%
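One of the refinements named in this excerpt, cross-domain similarity local scaling (CSLS), is compact enough to sketch: it re-scores nearest neighbors across the two mapped spaces by subtracting each vector's average similarity to its k nearest cross-lingual neighbors, which counteracts hub vectors. The embeddings below are random placeholders, assuming rows are already L2-normalized and mapped into a shared space.

```python
# Sketch of CSLS re-scoring: CSLS(x_i, y_j) = 2*cos(x_i, y_j) - r_Y(x_i) - r_X(y_j),
# where r_Y(x_i) is the mean cosine of x_i to its k nearest neighbors in Y
# (and symmetrically for r_X). Inputs are illustrative random embeddings.
import numpy as np

def csls_scores(X: np.ndarray, Y: np.ndarray, k: int = 10) -> np.ndarray:
    sims = X @ Y.T  # cosine similarities (rows assumed unit-norm)
    r_x = np.sort(sims, axis=1)[:, -k:].mean(axis=1, keepdims=True)  # r_Y(x_i)
    r_y = np.sort(sims, axis=0)[-k:, :].mean(axis=0, keepdims=True)  # r_X(y_j)
    return 2 * sims - r_x - r_y

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 16)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.normal(size=(60, 16)); Y /= np.linalg.norm(Y, axis=1, keepdims=True)
translations = csls_scores(X, Y).argmax(axis=1)  # best target index per source row
```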