Proceedings of the Second Conference on Machine Translation 2017
DOI: 10.18653/v1/w17-4702
Improving Word Sense Disambiguation in Neural Machine Translation with Sense Embeddings

Abstract: Word sense disambiguation is necessary in translation because different word senses often have different translations. Neural machine translation models learn different senses of words as part of an end-to-end translation task, and their capability to perform word sense disambiguation has so far not been quantified. We exploit the fact that neural translation models can score arbitrary translations to design a novel cross-lingual word sense disambiguation task that is tailored towards evaluating neural machine …
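The mechanism the abstract relies on is contrastive scoring: because an NMT model can assign a probability to any source-target pair, a WSD test can compare the model's score for a correct reference translation against a contrastive variant in which the ambiguous word is rendered with a wrong sense. The sketch below illustrates that comparison under stated assumptions; `score_translation` is a hypothetical placeholder for whatever model-specific scorer is available (e.g. summed token log-probabilities from force-decoding), not part of the paper's released code.

```python
from typing import Callable, Iterable, Tuple

# A contrastive WSD test item: source sentence, correct reference, and a
# contrastive translation where the ambiguous word is rendered with a
# wrong sense (e.g. German "Schlange" -> "snake" vs. "queue").
TestItem = Tuple[str, str, str]  # (source, reference, contrastive)


def contrastive_wsd_accuracy(
    score_translation: Callable[[str, str], float],
    items: Iterable[TestItem],
) -> float:
    """Fraction of items where the model scores the correct reference
    higher than the contrastive (wrong-sense) translation.

    `score_translation(source, target)` is assumed to return the model's
    log-probability of `target` given `source`; any NMT toolkit that can
    force-decode a fixed target sentence can supply such a scorer.
    """
    correct = 0
    total = 0
    for source, reference, contrastive in items:
        total += 1
        if score_translation(source, reference) > score_translation(source, contrastive):
            correct += 1
    return correct / total if total else 0.0
```

Since each item is a binary comparison between one reference and one contrastive variant, chance performance in this setup is 50%, so accuracies above that level indicate that the model has learned something about the sense distinction.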

Cited by 101 publications (65 citation statements)
References 18 publications
“…While this entirely automatic setup could give rise to inconsistencies which would require manual correction as in Rios Gonzales et al (2017), we argue that BabelNet constraints already provide some filtering (for example mostly keeping number constant). Given our aim to scale up to a large number of languages, the need for human intervention would make the creation of a large scale multilingual benchmark difficult and costly.…”
Section: Methods (mentioning)
confidence: 95%
“…However, all these test suites require significant amounts of expert knowledge and manual work for identifying the divergences and compiling the examples, which typically limits their coverage to a small number of language pairs and directions. For example, the test sets built by Rios Gonzales et al (2017) cover only 65 ambiguous words for two language pair directions.…”
Section: Introduction (mentioning)
confidence: 99%
“…In this respect, a model with predefined fixed patterns may struggle to encode global semantic features. To this end, we evaluate our models on two German-English WSD test suites, ContraWSD (Rios Gonzales et al., 2017) and MuCoW (Raganato et al., 2019). Table 6 shows the performance of our models on the WSD benchmarks.…”
Section: Word Sense Disambiguation (mentioning)
confidence: 99%
“…mined by the structural features of the sentence. In particular, to commercialize the results of WSD, it will be necessary to address most words and their senses in a wide range of domains; these will range from information retrieval [41], [45], [49] or machine translation [4], [14], [30], [37] to even second language education [7], [50], using various sources such as movies and books.…”
Section: Introduction (mentioning)
confidence: 99%