Proceedings of the 22nd Australasian Document Computing Symposium 2017
DOI: 10.1145/3166072.3166083
Large-scale Generative Query Autocompletion

Abstract: Query Autocompletion (QAC) systems are interactive tools that assist a searcher in entering a query, given a partial query prefix. Existing QAC research, with a number of notable exceptions, relies upon large existing query logs from which to extract historical queries. These queries are then ordered by some ranking algorithm as candidate completions, given the query prefix. Given the numerous search environments (e.g. enterprises, personal or secured data repositories) in which large query logs are unavailable,…
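The log-based setting the abstract describes can be illustrated with a minimal sketch: match the prefix against logged queries and rank matches by historical frequency. The query log and queries below are illustrative, not from the paper.

```python
from collections import Counter

# Hypothetical query log; a real system would draw on millions of entries.
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "web hosting", "weather radar", "weather today",
]

def complete(prefix, log, k=3):
    """Rank logged queries matching the prefix by historical frequency."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(complete("wea", query_log))  # most frequent completions first
```

This is only the simplest popularity baseline; the paper's focus is the setting where no such log exists and completions must be generated instead.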

Cited by 14 publications (7 citation statements). References 30 publications.
“…In the future, we plan to improve the performance of PrefXMRtree models by exploring label space augmentation techniques and using partial prefixes for generating auto-complete suggestions [25,27]. We also plan to use dense embeddings for representing contextual information such as previous queries, the user-profile, or time to further improve the performance of PrefXMRtree models for the task of query auto-completion.…”
Section: Discussion
confidence: 99%
“…suggestions [25,27] but we leave that to future work and compare all the models on test data restricted to cases where the ground-truth label (next query) is present in training data, and analyse the effect of prefix length and label frequency on performance of all models.…”
confidence: 99%
“…In the future, we plan to improve the performance of PrefXMRtree models by exploring label space augmentation techniques and using partial prefixes for generating auto-complete suggestions [29,31]. We also plan to use dense embeddings for representing contextual information such as previous queries, the user-profile, or time to further improve the performance of PrefXMRtree models for the task of query auto-completion.…”
Section: Discussion
confidence: 99%
“…Hence, for the remaining test data points, PrefXMRtree models and MFQ cannot generate correct suggestions, while the generative seq2seq models are sometimes able to. The performance of PrefXMRtree can be improved by augmenting the label space to improve coverage and/or using a partial prefix for generating suggestions [29,31], but we leave that to future work and compare all the models on test data restricted to cases where the ground-truth label (next query) is present in training data, and analyse the effect of prefix length and label frequency on performance of all models.…”
Section: Comparison With Other Baselines
confidence: 99%
“…A QAC system retrieves a candidate set matching the partial query P, drawing from a target string collection, with strings in the target collection having an associated score. Query Auto Completion systems typically match P against past queries from a log; or, in the absence of logs, they can also be synthesized [14,40]. Methods of ranking the candidates include static popularity [9], search context [9,29], forecast popularity [15], personalized ranking parameters [15,29,42], and diversity [16].…”
Section: Introduction
confidence: 99%
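The retrieval step quoted above, matching the partial query P against a scored target collection, is commonly served by a prefix trie. The toy trie below sketches that step under illustrative strings and static scores; it is not the implementation from any of the cited papers.

```python
# Toy trie: target strings are indexed by prefix, each carrying a static
# score (e.g. popularity). All strings and scores here are made up.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.score = None  # set when a complete target string ends here

def insert(root, string, score):
    node = root
    for ch in string:
        node = node.children.setdefault(ch, TrieNode())
    node.score = score

def candidates(root, prefix, k=2):
    node = root
    for ch in prefix:                 # walk down to the prefix node
        if ch not in node.children:
            return []
        node = node.children[ch]
    found = []
    stack = [(node, prefix)]
    while stack:                      # collect completions below the prefix
        n, s = stack.pop()
        if n.score is not None:
            found.append((n.score, s))
        for ch, child in n.children.items():
            stack.append((child, s + ch))
    return [s for _, s in sorted(found, reverse=True)[:k]]

root = TrieNode()
for q, s in [("query autocompletion", 0.9), ("query expansion", 0.7),
             ("quantum computing", 0.8)]:
    insert(root, q, s)

print(candidates(root, "query"))
```

Ranking here uses only the static score; the methods surveyed in the quote layer search context, forecast popularity, personalization, and diversity on top of this candidate set.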