Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.122

Leveraging Similar Users for Personalized Language Modeling with Limited Data

Cited by 10 publications (7 citation statements)
References 0 publications
“…In contrast to Welch et al (2022a), we found that authorship attribution representations can scale to a large number of users by learning a projection layer to reduce it to a similar size as the text encoding. Contrary to King and Cook (2020), who found that priming outperformed other methods in relatively low data settings (like ours), we find that it underperforms other methods at the verdict-level.…”
Section: Results
confidence: 85%
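The excerpt above hinges on a learned projection layer that shrinks a large authorship-attribution user representation down to the size of the text encoding. The sketch below only illustrates that idea; the dimensions, class name, and framework (PyTorch) are assumptions, not the cited authors' implementation.

```python
# Hypothetical sketch: project a large authorship-attribution user
# embedding down to the text-encoder size so the two can be combined
# (e.g. concatenated or summed) for personalization.
import torch
import torch.nn as nn

class UserProjection(nn.Module):
    def __init__(self, attribution_dim: int = 4096, text_dim: int = 768):
        super().__init__()
        # Single learned linear map; the dimensions are illustrative only.
        self.proj = nn.Linear(attribution_dim, text_dim)

    def forward(self, user_repr: torch.Tensor) -> torch.Tensor:
        # user_repr: (num_users, attribution_dim) -> (num_users, text_dim)
        return self.proj(user_repr)

# Example: reduce 10,000 user vectors to match a 768-d text encoding.
users = torch.randn(10_000, 4096)
projected = UserProjection()(users)  # shape: (10000, 768)
```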
“…Welch et al (2020b) modeled users by learning separate embedding matrices for each user in a shared embedding space. Welch et al (2022a) explored how to model users based on their similarity to others. They used the perplexity of personalized models and the predictions of an authorship attribution classifier to generate user representations.…”
Section: Personalization
confidence: 99%
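As a rough illustration of the approach described above (generating user representations from the perplexity of personalized models and the predictions of an authorship-attribution classifier), here is a minimal sketch; the feature layout, function names, and the cosine-similarity step are assumptions rather than the method of Welch et al (2022a).

```python
# Hypothetical sketch: build a user vector by concatenating
# (a) how well each personalized language model fits the user's text
#     (derived from perplexity) and
# (b) the class probabilities of an authorship-attribution classifier.
import numpy as np

def user_representation(perplexities: np.ndarray,
                        author_probs: np.ndarray) -> np.ndarray:
    """perplexities: (num_personalized_models,) perplexity scores.
    author_probs: (num_authors,) softmax output of an attribution classifier.
    Returns a single feature vector describing the user."""
    # Lower perplexity means a better fit, so use the negative log as a score.
    fit_scores = -np.log(perplexities)
    return np.concatenate([fit_scores, author_probs])

# Similar users can then be ranked by cosine similarity over these vectors.
def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```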
“…We discuss these decisions in §5.1. Nonetheless, in the case of scarce data on a single user's preferences, personalised LLMs may leverage similar users [233] or make inferences about their preferences and values from limited information. Making assumptions about the user (especially if they are demographically or geographically-informed) is a form of algorithmic profiling, risking the non-consensual categorisation of peoples [220].…”
Section: IR4 Essentialism and Profiling
confidence: 99%
“…One might think, for example, that restricting a named entity to being comparable only to another named entity cannot damage the results, but experience has shown that this discrimination leads to a bad similarity between expressions such as "the Japanese president… 'And' in Japan, the president…" [26]. The system whose results are given in the evaluations thus operates without any restriction as to the syntactic nature of the terms compared [27]. Below is an example of calculating the similarity between two sentences.…”
Section: Similarity of Mihalcea [23]
confidence: 99%
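The quoted passage breaks off before the citing paper's own worked example, which is not reproduced here. As a hedged stand-in, the sketch below shows the general shape of a Mihalcea-style sentence similarity: each word is matched to its most similar word in the other sentence and the two directed scores are averaged. The `word_sim` argument and the unweighted averaging are simplifying assumptions; the original formulation also weights words by specificity (idf).

```python
# Generic, unweighted sketch of a Mihalcea-style sentence similarity.
# word_sim is a placeholder for any word-to-word measure (e.g. WordNet-based).
def sentence_similarity(s1: list[str], s2: list[str], word_sim) -> float:
    """Average, in both directions, of each word's best match
    in the other sentence."""
    def directed(a, b):
        return sum(max(word_sim(w, v) for v in b) for w in a) / len(a)
    return 0.5 * (directed(s1, s2) + directed(s2, s1))

# Toy usage with exact-match word similarity:
exact = lambda w, v: 1.0 if w == v else 0.0
print(sentence_similarity(["the", "president", "of", "japan"],
                          ["japan", "the", "president"], exact))  # 0.875
```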