Query auto completion is known to provide poor predictions of the user's query when her input prefix is very short (e.g., one or two characters). In this paper we show that context, such as the user's recent queries, can be used to improve the prediction quality considerably even for such short prefixes. We propose a context-sensitive query auto completion algorithm, NearestCompletion, which outputs the completions of the user's input that are most similar to the context queries. To measure similarity, we represent queries and contexts as high-dimensional term-weighted vectors and use cosine similarity. The mapping from queries to vectors is done through a new query expansion technique that we introduce, which expands a query by traversing the query recommendation tree rooted at the query. In order to evaluate our approach, we performed extensive experimentation over the public AOL query log. We demonstrate that when the user's recent queries are relevant to the current query she is typing, then after typing a single character, NearestCompletion's MRR is 48% higher on average relative to the MRR of the standard MostPopularCompletion algorithm. When the context is irrelevant, however, NearestCompletion's MRR is essentially zero. To mitigate this problem, we propose HybridCompletion, a hybrid of NearestCompletion and MostPopularCompletion. HybridCompletion is shown to dominate both NearestCompletion and MostPopularCompletion, achieving a total improvement of 31.5% in MRR relative to MostPopularCompletion on average.
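To make the ranking idea concrete, here is a minimal Python sketch of context-sensitive completion scoring via cosine similarity over term-weighted vectors. The names (tf_vector, nearest_completion, hybrid_score, alpha) and the plain term-frequency weighting are illustrative assumptions; the paper's expansion-based representation and its exact score aggregation are not reproduced.

```python
# Minimal sketch of context-sensitive completion ranking (names and the plain
# term-frequency weighting are illustrative; the paper's expansion-based
# representation and exact score aggregation are not reproduced here).
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector for a query string."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-weighted vectors."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_completion(prefix, context_queries, candidates):
    """Rank candidate completions of `prefix` by similarity to the recent context."""
    ctx = Counter()
    for q in context_queries:
        ctx.update(tf_vector(q))
    matches = [c for c in candidates if c.startswith(prefix)]
    return sorted(matches, key=lambda c: cosine(tf_vector(c), ctx), reverse=True)

def hybrid_score(similarity, popularity, alpha=0.5):
    """Illustrative convex combination in the spirit of HybridCompletion."""
    return alpha * similarity + (1 - alpha) * popularity

# Recent queries about cars bias the ranking toward the car-related completion.
print(nearest_completion("ja", ["used cars", "car dealers"],
                         ["jaguar cars", "japan travel", "java tutorial"]))
```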
We present NearBucket-LSH, an effective algorithm for similarity search in large-scale distributed online social networks organized as peer-to-peer overlays. As communication is a dominant consideration in distributed systems, we focus on minimizing the network cost while guaranteeing good search quality. Our algorithm is based on Locality Sensitive Hashing (LSH), which limits the search to collections of objects, called buckets, that have a high probability of being similar to the query. More specifically, NearBucket-LSH employs an LSH extension that searches in near buckets, which improves search quality but also significantly increases the network cost. We decrease the network cost by considering the internals of both LSH and the P2P overlay and harnessing their properties to our needs. We show that NearBucket-LSH increases search quality for a given network cost compared to previous art. In many cases, the search quality increases by more than 50%.
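The following Python sketch illustrates the core LSH mechanism with near-bucket probing, here realized as probing buckets whose bit signature differs from the query's in a single bit. The class and its random-hyperplane hashing are assumptions for illustration only; the P2P-overlay-aware cost reductions described in the paper are omitted.

```python
# Illustrative random-hyperplane LSH with near-bucket probing (buckets whose
# signature differs from the query's in one bit). The P2P-overlay-aware
# optimizations described in the paper are omitted.
import numpy as np

class NearBucketIndex:
    def __init__(self, dim, num_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((num_bits, dim))
        self.buckets = {}  # bit signature -> list of (item_id, vector)

    def _signature(self, v):
        return tuple(int(x > 0) for x in self.planes @ np.asarray(v))

    def index(self, item_id, v):
        self.buckets.setdefault(self._signature(v), []).append((item_id, v))

    def query(self, v, probe_near=True):
        sig = self._signature(v)
        sigs = [sig]
        if probe_near:
            for i in range(len(sig)):  # also probe the "near" buckets
                flipped = list(sig)
                flipped[i] ^= 1
                sigs.append(tuple(flipped))
        candidates = []
        for s in sigs:
            candidates.extend(self.buckets.get(s, []))
        return candidates
```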
Today's search engines process billions of online user queries a day over huge collections of data. In order to scale, they distribute query processing among many nodes, where each node holds and searches over a subset of the index called a shard. Responses from some nodes occasionally fail to arrive within a reasonable time interval due to various reasons, such as high server load and network congestion. Search engines typically need to respond in a timely manner and therefore skip such tail-latency responses, which degrades search quality. In this paper, we tackle response misses due to high tail latencies with the goal of maximizing search quality. Search providers today use redundancy in the form of Replication to mitigate response misses, constructing multiple copies of each shard and searching all replicas. This approach is not ideal, as it wastes resources on searching duplicate data. We propose two strategies to reduce this waste. First, we propose rSmartRed, an optimal shard selection scheme for replicated indexes. Second, when feasible, we propose to replace Replication with Repartition, which constructs independent index instances instead of exact copies. We analytically prove that rSmartRed's selection is optimal for Replication, and that Repartition achieves better search quality than Replication. We confirm our results with an empirical study using two real-world datasets, showing that rSmartRed improves recall compared to currently used approaches. We additionally show that Repartition improves over Replication in typical scenarios.
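As a minimal sketch of the problem setting, the Python snippet below scatters a query to all shards in parallel and merges only the responses that arrive before a deadline, skipping tail-latency responses. The `shard.search` method and the `.score` attribute on hits are hypothetical placeholders; rSmartRed's selection rule and the Repartition construction are not modeled.

```python
# Sketch of scatter-gather search with a response deadline: responses that miss
# the deadline are skipped. `shard.search` and the `.score` attribute on hits
# are hypothetical placeholders; rSmartRed and Repartition are not modeled.
import concurrent.futures as cf

def search_with_deadline(shards, query, timeout_s, top_k=10):
    """Query all shards in parallel and merge only responses that arrive in time."""
    pool = cf.ThreadPoolExecutor(max_workers=len(shards))
    futures = [pool.submit(shard.search, query, top_k) for shard in shards]
    done, not_done = cf.wait(futures, timeout=timeout_s)
    results = []
    for f in done:
        results.extend(f.result())  # responses that arrived within the deadline
    # Futures in `not_done` are tail-latency responses and are skipped; pending
    # requests are cancelled, already-running ones finish in the background.
    pool.shutdown(wait=False, cancel_futures=True)
    return sorted(results, key=lambda hit: hit.score, reverse=True)[:top_k]
```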
Similarity search is the task of retrieving data items that are similar to a given query. In this paper, we introduce the time-sensitive notion of similarity search over endless data streams (SSDS), which takes into account data quality and temporal characteristics in addition to similarity. SSDS is challenging as it needs to process unbounded data while computation resources are bounded. We propose Stream-LSH, a randomized SSDS algorithm that bounds the index size by retaining items according to their freshness, quality, and dynamic popularity attributes. We analytically show that Stream-LSH increases the probability of finding similar items compared to alternative approaches using the same space capacity. We further conduct an empirical study using real-world stream datasets, which confirms our theoretical results.
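Below is a simplified Python sketch of bounding a bucket's size by retaining items according to quality and freshness. The exponential freshness decay, the half-life parameter, and the deterministic eviction rule are illustrative assumptions; Stream-LSH's actual retention policy is probabilistic and also accounts for dynamic popularity.

```python
# Simplified sketch of a bounded-size LSH bucket that retains items by a
# quality-times-freshness score. The exponential decay, half-life, and
# deterministic eviction are illustrative; Stream-LSH's retention policy is
# probabilistic and also accounts for dynamic popularity.
import time

class BoundedBucket:
    def __init__(self, capacity, half_life_s=3600.0):
        self.capacity = capacity
        self.half_life_s = half_life_s
        self.items = []  # list of (inserted_at, quality, item_id, vector)

    def _score(self, inserted_at, quality, now):
        freshness = 0.5 ** ((now - inserted_at) / self.half_life_s)
        return quality * freshness

    def insert(self, item_id, vector, quality, now=None):
        now = time.time() if now is None else now
        self.items.append((now, quality, item_id, vector))
        if len(self.items) > self.capacity:
            # Evict the item whose current quality*freshness score is lowest.
            victim = min(self.items, key=lambda it: self._score(it[0], it[1], now))
            self.items.remove(victim)
```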