Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/P19-1379
Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View

Abstract: Diachronic word embeddings have been widely used in detecting temporal changes. However, existing methods face the meaning conflation deficiency by representing a word as a single vector at each time period. To address this issue, this paper proposes a sense representation and tracking framework based on deep contextualized embeddings, aiming at answering not only what and when, but also how the word meaning changes. The experiments show that our framework is effective in representing fine-grained word senses, …
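To make the abstract's central idea concrete (representing a word by multiple sense clusters derived from contextualized embeddings, rather than one vector per period), here is a minimal sketch. It is not the paper's pipeline: the `bert-base-uncased` model, the fixed KMeans cluster count standing in for senses, and the helper names `embed_target` and `track_senses` are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): induce senses by clustering
# contextualized embeddings of a target word, then track how often each
# cluster occurs in each time period.
from collections import Counter

import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_target(sentence, target):
    """Return the contextual vector of the first subword of `target`, or None."""
    enc = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    target_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(target)[0])
    positions = (enc["input_ids"][0] == target_id).nonzero().flatten()
    return hidden[positions[0]].numpy() if len(positions) else None

def track_senses(period_sentences, target, k=3):
    """period_sentences: {period: [sentence, ...]}; returns sense shares per period."""
    vecs, periods = [], []
    for period, sentences in period_sentences.items():
        for sent in sentences:
            v = embed_target(sent, target)
            if v is not None:
                vecs.append(v)
                periods.append(period)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(np.array(vecs))
    shares = {}
    for period in period_sentences:
        counts = Counter(int(l) for l, p in zip(labels, periods) if p == period)
        total = sum(counts.values()) or 1
        shares[period] = {sense: n / total for sense, n in counts.items()}
    return shares
```

Under these assumptions, a rising share for one cluster and a falling share for another would be read as a change in the word's dominant sense over time.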

Cited by 70 publications (70 citation statements) | References 34 publications
“…Future work can use anomaly detection approaches operating on our model's predicted word vectors to detect anomalies in a word's representation across time. We also plan to investigate different architectures, such as Variational Autoencoders (Kingma and Welling, 2014), and incorporate contextual representations (Devlin et al., 2019; Hu et al., 2019) to detect new senses of words. A limitation of our work is that it has been tested on a single dataset, where 65 words have undergone semantic change; testing our models on datasets of different duration and in different languages will provide clearer evidence of their effectiveness.…”
Section: Discussion (confidence: 99%)
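The anomaly-detection direction mentioned in this statement can be illustrated with a small sketch: flag periods in which a word's vector drifts unusually far from its previous one. The step-wise cosine drift and the mean-plus-two-standard-deviations threshold below are assumptions chosen for illustration, not a method from the cited work.

```python
# Illustrative sketch: flag years where a word's vector drifts unusually far
# from the previous year's vector (the threshold rule is an assumption).
import numpy as np

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_anomalous_years(vectors_by_year, z=2.0):
    """vectors_by_year: {year: np.ndarray}; returns years with unusually large drift."""
    years = sorted(vectors_by_year)
    drifts = [cosine_distance(vectors_by_year[a], vectors_by_year[b])
              for a, b in zip(years, years[1:])]
    if len(drifts) < 2:
        return []
    threshold = float(np.mean(drifts) + z * np.std(drifts))
    return [b for (a, b), d in zip(zip(years, years[1:]), drifts) if d > threshold]
```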
“…Further investigations are possible, stemming from the fact that senses and terms share the same multilingual semantic space: For example, we are allowed to compare and unveil meaning connections between terms across different languages. Such capabilities can be useful in characterizing subtle and elusive meaning shift phenomena, such as diachronic sense modeling (Hu, Li, and Liang 2019) and conceptual misalignment, which is a well-known issue, for example, in the context of automatic translation. This issue has been approached, for the translation of European laws, through the design of formal ontologies (Ajani et al 2010).…”
Section: Discussion (confidence: 99%)
“…This provides new opportunities for diachronic analysis: for example, it is possible to group similar token representations and measure the diversity of such representations, while a predefined number of senses is not strictly necessary. Thus, there is currently increased interest in the topic of language change detection using contextualized word embeddings [9,10,14,21,27,28].…”
Section: Contextualized Word Embeddings (confidence: 99%)
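One simple way to realize the "diversity of token representations" idea, without fixing a number of senses in advance, is to score each period by the mean pairwise cosine distance among the word's contextualized token vectors; a period with markedly higher diversity may contain emerging usages. The measure below is only one illustrative choice and assumes scikit-learn.

```python
# Illustrative diversity measure: mean pairwise cosine distance among a word's
# contextualized token vectors within one time period (one of many possible choices).
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

def representation_diversity(token_vectors):
    """token_vectors: (n_occurrences, dim) array; returns mean pairwise cosine distance."""
    n = len(token_vectors)
    if n < 2:
        return 0.0
    dists = cosine_distances(token_vectors)
    return float(dists[np.triu_indices(n, k=1)].mean())

# Comparing, say, representation_diversity(vectors_1960) with
# representation_diversity(vectors_2010) then requires no predefined sense inventory.
```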