Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-3026
Personalized neural language models for real-world query auto completion

Abstract: Query auto completion (QAC) systems are a standard part of industrial search engines, helping users formulate their queries. Such systems update their suggestions after each character the user types, predicting the user's intent from various signals, one of the most common being popularity. Recently, deep learning approaches have been proposed for the QAC task, specifically to address the main limitation of previous popularity-based methods: the inability to predict unseen queries. In this work we improve prev…
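The popularity-based baseline that the abstract contrasts with can be sketched in a few lines. The class and names below are hypothetical, illustrative only, and not the paper's method; the sketch exists to show the limitation the abstract names: a popularity-based completer can only ever suggest queries it has already observed.

```python
from collections import defaultdict

class PopularityQAC:
    """Minimal popularity-based query auto completion sketch:
    rank previously logged queries matching the typed prefix
    by how often they were issued."""

    def __init__(self):
        self.counts = defaultdict(int)  # query -> observed frequency

    def add_query(self, query):
        """Log one issued query."""
        self.counts[query] += 1

    def complete(self, prefix, k=3):
        """Return up to k logged queries starting with `prefix`,
        most popular first (ties broken alphabetically)."""
        matches = [q for q in self.counts if q.startswith(prefix)]
        matches.sort(key=lambda q: (-self.counts[q], q))
        return matches[:k]

qac = PopularityQAC()
for q in ["weather today", "weather today", "weather radar", "web email"]:
    qac.add_query(q)

print(qac.complete("wea"))  # prints ['weather today', 'weather radar']
print(qac.complete("xyz"))  # prints [] — an unseen prefix yields nothing
```

The empty result for an unseen prefix is exactly the gap that the neural, character-level approaches cited in this report aim to close by generating completions rather than retrieving them.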

Cited by 16 publications (10 citation statements) | References 18 publications
“…Recent papers have also considered biasing LMs with context beyond the previous sentence and incorporate additional signals such as date-time, geolocation or gender (Ma et al, 2018;Diehl Martinez et al, 2021) or application metadata like dialog act or intent (Masumura et al, 2019;Shenoy et al, 2021;Liu and Lane, 2017). Other sources of context used to bias LMs are personalized content (Jaech and Ostendorf, 2018b;Fiorini and Lu, 2018); conversational turn-taking (Xiong et al, 2018); multi-modal sources (Moriya and Jones, 2018); or even user demographics to suggest fashion suggestions (Denk and Peleteiro Ramallo, 2020).…”
Section: Prior Work
confidence: 99%
“…Other approaches further improve over this by ranking suggestions based on previous queries, user profile, time-sensitivity, or coherence with the typed prefix [5,34,36,39]. Recent work also leverages deep learning models to generate additional features such as query likelihood using language models [30], personalized language models [12,14], or previous queries [36] to rank queries using a learning to rank framework [41]. However, these models typically rely on heuristic methods to generate a fixed number of candidates for all popular prefixes.…”
Section: Related Work
confidence: 99%
“…
Dataset               | Size                              | Publicly Available | Citations
AOL                   | 16M queries, 3M sessions          | Yes                | [38], [21], [11], [1], [12], [37], [40], [30], [31], [8], [35], [7], [6], [5], [19], [34], [10], [15], [20], [11], [13]
MS MARCO              | 1M queries                        | Yes                | [1], [40], [6], [5]
Yahoo Search Engine   | 4M queries, 549K sessions         | No                 | [25]
Tencent website       | 160M queries                      | No                 | [17], [16]
"Baidu Knows" website | 85K (question, best answer) pairs | No                 | [27]

…later switches to searching about dogs, which shows gradual topic drift revolving around the abstract concept of animals. For this reason, we believe that a gold standard dataset of queries is required that would not rely on the weak assumption of gradual query improvement within the same session.…”
Section: Dataset Name
confidence: 99%