2019
DOI: 10.48550/arxiv.1906.01622
Preprint

Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization

Cited by 7 publications (4 citation statements) | References 24 publications
“…For mapping across different embedding spaces, we use the vecmap toolkit. We follow Zhang et al. (2019) to pre-process the embeddings, i.e., the embeddings are unit-normed, mean-centered, and unit-normed again. For bilingual induction, we follow the steps outlined by Artetxe et al. (2018a), i.e., whitening each space and solving Procrustes.…”
Section: Vecmap Toolkit
confidence: 99%
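The pre-processing this statement describes is the iterative normalization of Zhang et al. (2019): unit-norm each vector, mean-center each dimension, then unit-norm again. A minimal NumPy sketch, assuming the embeddings sit in a (vocab_size, dim) matrix; the function name and `n_iter` parameter are illustrative, and with `n_iter=1` it reproduces exactly the three steps quoted above (the original paper repeats the loop until convergence):

```python
import numpy as np

def iterative_normalization(X, n_iter=1):
    """Pre-process embeddings as described by Zhang et al. (2019):
    repeatedly unit-norm the rows and mean-center the columns.
    X is a (vocab_size, dim) matrix; returns a normalized copy."""
    X = X.astype(np.float64, copy=True)
    for _ in range(n_iter):
        # Unit-norm every word vector (rows).
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        # Mean-center every dimension (columns).
        X -= X.mean(axis=0, keepdims=True)
    # Final unit-norm so all vectors lie on the unit hypersphere.
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X
```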
“…We use the VecMap toolkit for mapping across different embedding spaces. For this, we pre-process the embeddings using the process flow outlined by Zhang et al. (2019): the embeddings are unit-normed and mean-centered, followed by another round of unit-normalization.…”
Section: Vecmap Toolkit
confidence: 99%
“…Tagowski et al. [17] apply the embedding alignment technique to the graph domain, aligning a set of node2vec embeddings [4] learned over different snapshots of an evolving graph. However, all of these methods assume the embeddings are fixed, which can result in a large alignment error if the two sets of pretrained embeddings are very distinct [23]. Unlike these methods, we jointly learn the embeddings along with the backward transformation function, achieving much better alignment performance and better unintended task performance.…”
Section: Related Work
confidence: 99%
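The "fixed embeddings" approach this statement critiques can be illustrated by a post-hoc orthogonal alignment of two independently trained snapshot embeddings, fit on nodes present in both snapshots. This is a minimal sketch of that baseline, not the joint-learning method the citing authors propose; the function name and the `anchor_idx` setup are assumptions for illustration:

```python
import numpy as np

def align_snapshots(E_old, E_new, anchor_idx):
    """Post-hoc alignment of two independently trained snapshot
    embeddings (e.g., separate node2vec runs): fit an orthogonal
    map on anchor nodes shared by both snapshots, then apply it
    to all of E_new. Rows are node vectors; anchor_idx holds the
    row indices of the shared nodes in both matrices."""
    A, B = E_new[anchor_idx], E_old[anchor_idx]
    U, _, Vt = np.linalg.svd(A.T @ B)
    W = U @ Vt  # orthogonal, so distances within E_new are preserved
    return E_new @ W

# If the two snapshots are structurally very different (i.e., the
# spaces are non-isomorphic), no orthogonal W fits the anchors well,
# which is the large alignment error the statement points out.
```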