Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.299
An Empirical Study of the Downstream Reliability of Pre-Trained Word Embeddings

Abstract: While pre-trained word embeddings have been shown to improve the performance of downstream tasks, many questions remain regarding their reliability: Do the same pre-trained word embeddings result in the best performance with slight changes to the training data? Do the same pre-trained embeddings perform well with multiple neural network architectures? Do imputation strategies for unknown words impact reliability? In this paper, we introduce two new metrics to understand the downstream reliability of word embed…
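The abstract asks whether imputation strategies for unknown (out-of-vocabulary) words affect downstream reliability. The paper's exact strategies are not visible in this excerpt, but common baselines impute an OOV word with a zero vector, a small random vector, or the mean of all known vectors. Below is a minimal Python sketch of those three baselines; the function names, file format, and dimensionality are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def load_embeddings(path):
    """Parse a GloVe-style text file: one token per line, then its vector."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def impute_oov(vectors, vocab, strategy="mean", dim=300, seed=0):
    """Return a vector for every word in `vocab`, imputing unknown words.

    The strategies here (zero / random / mean of known vectors) are common
    baselines, not necessarily the ones evaluated in the paper.
    """
    rng = np.random.default_rng(seed)
    mean_vec = np.mean(list(vectors.values()), axis=0)
    out = {}
    for word in vocab:
        if word in vectors:
            out[word] = vectors[word]
        elif strategy == "zero":
            out[word] = np.zeros(dim, dtype=np.float32)
        elif strategy == "random":
            out[word] = rng.normal(0.0, 0.1, dim).astype(np.float32)
        else:  # "mean"
            out[word] = mean_vec
    return out
```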

Cited by 1 publication (1 citation statement)
References: 32 publications
“…The latter result in much lower bias measurements. This is all the more important for FPED and FNED, as they have been very influential, with many works relying exclusively on these metrics (Rios, 2020; Huang et al., 2020b; Gencoglu, 2021; Rios and Lwowski, 2020).…”
Section: Empirical Metric Comparison
Confidence: 99%
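The quoted critique concerns FPED and FNED (False Positive/False Negative Equality Difference), fairness metrics introduced by Dixon et al. (2018) that sum, over identity groups, the absolute gap between each group's error rate and the overall error rate. A minimal sketch of those definitions follows; variable names and the binary-label setup are illustrative:

```python
import numpy as np

def rates(y_true, y_pred):
    """False positive rate and false negative rate for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fpr = np.mean(y_pred[y_true == 0] == 1) if (y_true == 0).any() else 0.0
    fnr = np.mean(y_pred[y_true == 1] == 0) if (y_true == 1).any() else 0.0
    return fpr, fnr

def fped_fned(y_true, y_pred, groups):
    """FPED / FNED: sum over groups of |overall rate - per-group rate|."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    groups = np.asarray(groups)
    overall_fpr, overall_fnr = rates(y_true, y_pred)
    fped = fned = 0.0
    for g in np.unique(groups):
        mask = groups == g
        fpr_g, fnr_g = rates(y_true[mask], y_pred[mask])
        fped += abs(overall_fpr - fpr_g)
        fned += abs(overall_fnr - fnr_g)
    return fped, fned
```

Because FPED and FNED are unnormalized sums of absolute gaps, their magnitude depends on the number of groups, which is one reason alternative metrics can yield much lower bias measurements on the same predictions, as the quoted statement notes.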