Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.604

Exploring the Value of Personalized Word Embeddings

Abstract: In this paper, we introduce personalized word embeddings, and examine their value for language modeling. We compare the performance of our proposed prediction model when using personalized versus generic word representations, and study how these representations can be leveraged for improved performance. We provide insight into what types of words can be more accurately predicted when building personalized models. Our results show that a subset of words belonging to specific psycholinguistic categories tend to …
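The abstract describes comparing personalized with generic word representations and leveraging both for prediction. As a minimal illustrative sketch only: one common way to combine two embeddings of the same word is linear interpolation. The function name, the vectors, and the interpolation scheme below are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: combine a generic word embedding with a user-specific
# ("personalized") one by linear interpolation. The weighting scheme here is
# an illustrative assumption, not the method from the paper.

def combine(generic, personal, alpha=0.5):
    """Interpolate element-wise between a generic and a personalized vector.

    alpha = 1.0 uses only the generic embedding; alpha = 0.0 only the
    personalized one.
    """
    return [alpha * g + (1 - alpha) * p for g, p in zip(generic, personal)]

generic_vec = [0.2, 0.4, 0.6]   # embedding from a model trained on all users
personal_vec = [0.8, 0.0, 0.4]  # embedding from one user's own text

combined = combine(generic_vec, personal_vec, alpha=0.5)
print(combined)  # [0.5, 0.2, 0.5]
```

A model could then feed `combined` into its prediction layer in place of the generic vector alone.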

Cited by 10 publications (6 citation statements)
References 25 publications
“…The diagram by Welch et al (2022) illustrates the utilization of data, models, and metrics (Figure 3). The diagram comprises two primary sections: a left rectangle demonstrating the computation of three similarity metrics and a right rectangle depicting two methods for leveraging similar user data to create personalized models.…”
Section: Transformer-based Personalization
confidence: 99%
“…In addition to addressing social and human factors, recent work has also focused on personalized language models (King and Cook, 2020; Jaech and Ostendorf, 2018), learning author representations (Delasalles et al, 2019), and personalized word embeddings (Lin et al, 2017), pointing out the importance of personalized semantics in understanding language. Welch et al (2020b) explore personalized versus generic word representations, showing the benefits of combining both. While these models are trained per user, Mireshghallah et al (2021) train a single shared model for all users for personalized sentiment analysis.…”
Section: Related Work
confidence: 99%
“…These may encode, say, that 'funny' is more subjective than 'violent' if organic text shows more disagreement in the usage of the former. Welch et al (2020) show that personalized word embeddings can be built using text from different users. Subjectivity could be extended to richer phrases with more expressive power (e.g., 'reminds me of my childhood').…”
Section: Research Challenges
confidence: 99%