Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.285
With More Contexts Comes Better Performance: Contextualized Sense Embeddings for All-Round Word Sense Disambiguation

Abstract: Contextualized word embeddings have been employed effectively across several tasks in Natural Language Processing, as they have proved to carry useful semantic information. However, it is still hard to link them to structured sources of knowledge. In this paper we present ARES (context-AwaRe Embeddings of Senses), a semi-supervised approach to producing sense embeddings for the lexical meanings within a lexical knowledge base that lie in a space that is comparable to that of contextualized word vectors. ARES r…
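Because ARES places sense embeddings in the same vector space as contextualized word vectors, disambiguation can be framed as a nearest-neighbor search: embed the target word in context, then pick the candidate sense whose embedding is most similar. The sketch below illustrates only that idea with toy 3-dimensional vectors and made-up sense keys; the real ARES vectors are high-dimensional and the values here are not taken from the released embeddings.

```python
import numpy as np

def disambiguate(word_vec, sense_embeddings):
    """Pick the sense whose embedding is closest (by cosine similarity)
    to the contextualized vector of the target word."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sense_embeddings, key=lambda s: cos(word_vec, sense_embeddings[s]))

# Toy vectors for two senses of "bank"; keys and values are illustrative only.
senses = {
    "bank%1:14:00::": np.array([0.9, 0.1, 0.0]),  # financial institution
    "bank%1:17:01::": np.array([0.0, 0.2, 0.9]),  # sloping land by a river
}
# Stand-in for a contextual (e.g. BERT) vector of "bank" in a money context.
context_vec = np.array([0.8, 0.2, 0.1])
print(disambiguate(context_vec, senses))  # → bank%1:14:00::
```

The appeal of this setup, as the abstract notes, is that it links contextualized representations to a structured knowledge base: each prediction is a sense key, not just a cluster of usages.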

Cited by 74 publications (75 citation statements)
References 41 publications
“…As previously stated, our objective has not been to dismiss the undeniable importance of syntax-based innovation in SRL, but rather to establish a launch pad from which future syntactic developments can take off. In order to encourage future work on joint syntactic and semantic dependency parsing (Cai and Lapata, 2019b), the use of more powerful or cleverly trained language models (Lewis et al., 2020), the integration of SRL into other cross-lingual semantics-first tasks such as Semantic Parsing (Blloshmi et al., 2020) and Word Sense Disambiguation (Scarlini et al., 2020), and the exploitation and integration of newly available knowledge from recently released resources, such as VerbAtlas (Di Fabio et al., 2019) and Conception, we make available not only the code for our SRL model and experiments, but also our model checkpoints and training/validation logs at https://github.com/SapienzaNLP/multi-srl.…”
Section: Discussion
confidence: 99%
“…InVeRo is a growing platform: in the future, we plan to enhance our Model API by adding, alongside the already available state-of-the-art span-based model, the state-of-the-art dependency-based model of Conia and Navigli (2020a), so that users can easily switch between the two approaches and choose the one that best suits their needs. Thanks to BabelNet and recent advances in cross-lingual techniques for tasks where semantics is crucial (Barba et al., 2020; Blloshmi et al., 2020; Conia and Navigli, 2020b; Pasini, 2020; Scarlini et al., 2020), we also plan to provide support for multiple languages to enable SRL integration into multilingual and cross-lingual settings. We believe that the InVeRo platform can make SRL more accessible to the research community, and we look forward to the development of semantics-first approaches in an ever wider range of NLP applications.…”
Section: Discussion
confidence: 99%
“…The ability to identify the intended sense of a polysemous word in a given context is one of the fundamental problems in lexical semantics. It is usually addressed with two different kinds of approaches, relying on either sense-annotated corpora (Bevilacqua and Navigli, 2020; Scarlini et al., 2020; Blevins and Zettlemoyer, 2020) or knowledge bases (Moro et al., 2014; Agirre et al., 2014; Scozzafava et al., 2020). Both are usually evaluated on dedicated benchmarks, including at least five WSD tasks in the Senseval and SemEval series, from 2001 (Edmonds and Cotton, 2001) to 2015 (Moro and Navigli, 2015a), which are included in the test suite of Raganato et al. (2017).…”
Section: Word Sense Disambiguation
confidence: 99%
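The citation statement above distinguishes corpus-based from knowledge-based WSD. The classic knowledge-based idea is Lesk-style gloss overlap: choose the sense whose dictionary definition shares the most words with the surrounding context. The following is a minimal sketch of that idea with toy glosses standing in for a real lexical knowledge base such as WordNet; the sense keys and gloss texts are invented for illustration.

```python
def simplified_lesk(context, glosses):
    """Knowledge-based WSD in the spirit of the Lesk algorithm: return the
    sense whose gloss has the largest word overlap with the context."""
    context_words = set(context.lower().split())
    def overlap(sense):
        return len(context_words & set(glosses[sense].lower().split()))
    return max(glosses, key=overlap)

# Toy glosses for two senses of "bank" (illustrative, not from WordNet).
glosses = {
    "bank.n.01": "a financial institution that accepts deposits and lends money",
    "bank.n.02": "sloping land beside a body of water such as a river",
}
print(simplified_lesk("she sat on the bank of the river", glosses))  # → bank.n.02
```

Supervised systems such as ARES instead learn sense representations from sense-annotated (and automatically annotated) contexts, which is why the two families of approaches are typically benchmarked on the same Senseval/SemEval test suite.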