Evaluation measures act as objective functions to be optimized by information retrieval systems. Such objective functions must accurately reflect user requirements, particularly when tuning IR systems and learning ranking functions. Ambiguity in queries and redundancy in retrieved documents are poorly reflected by current evaluation measures. In this paper, we present an evaluation framework that systematically rewards novelty and diversity, and we develop it into a specific evaluation measure based on cumulative gain. We demonstrate the feasibility of our approach using a test collection based on the TREC question answering track.
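To make the cumulative-gain idea concrete, here is a minimal sketch of a novelty-discounted gain computation in the spirit of the measure the abstract describes. The nugget-set representation of documents, the `alpha` parameter, and the function name are illustrative assumptions rather than the paper's implementation, and normalization against an ideal ranking is omitted.

```python
from math import log2

def alpha_dcg(ranking, alpha=0.5, depth=10):
    """Sketch of a cumulative-gain measure that discounts redundancy.

    `ranking` is a list of documents, each represented as the set of
    information-nugget IDs it covers (an illustrative encoding).  A
    nugget already covered by k earlier documents contributes only
    (1 - alpha)**k, so repeated coverage earns diminishing gain.
    """
    seen = {}      # nugget -> number of earlier documents covering it
    score = 0.0
    for rank, nuggets in enumerate(ranking[:depth], start=1):
        gain = sum((1 - alpha) ** seen.get(n, 0) for n in nuggets)
        for n in nuggets:
            seen[n] = seen.get(n, 0) + 1
        score += gain / log2(rank + 1)   # standard rank-position discount
    return score

# Two toy rankings over the same nuggets: the diverse ordering wins.
diverse   = [{"a", "b"}, {"c"}, {"a"}]
redundant = [{"a", "b"}, {"a"}, {"a", "b"}]
assert alpha_dcg(diverse) > alpha_dcg(redundant)
```

On these toy rankings the diverse ordering outscores the redundant one, which is exactly the behavior a novelty-rewarding measure should exhibit.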
Understanding the intent underlying user queries may help personalize search results and improve user satisfaction. In this paper, we develop a methodology that uses ad clickthrough logs, query-specific information, and the content of search engine result pages to study the characteristics of query intent, especially commercial intent. The findings of our study suggest that ad clickthrough features, query features, and the content of search engine result pages are jointly effective in detecting query intent. We also study the effect of query type and the number of displayed ads on the average clickthrough rate. As a practical application of our work, we show that modeling query intent can improve the accuracy of ad clickthrough prediction for previously unseen queries.
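As a rough illustration of how such intent detection might be assembled, the sketch below trains a logistic-regression classifier over a handful of hypothetical features (historical ad clickthrough rate, query length, number of displayed ads, and a SERP commercial-content ratio). The feature set, the toy values, and the model choice are assumptions for illustration only; the study's actual features and methodology are those described in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors; the column meanings are illustrative
# stand-ins for clickthrough, query, and SERP-content features:
# [historical ad CTR, query length, #ads displayed, SERP commercial-term ratio]
X = np.array([
    [0.12, 2, 8, 0.40],   # e.g. "buy laptop"                -> commercial
    [0.01, 4, 1, 0.05],   # e.g. "photosynthesis definition" -> informational
    [0.09, 3, 6, 0.35],
    [0.00, 5, 0, 0.02],
])
y = np.array([1, 0, 1, 0])  # 1 = commercial intent

clf = LogisticRegression().fit(X, y)

# Estimated probability of commercial intent for a previously unseen query,
# which could then feed a downstream ad-clickthrough predictor.
print(clf.predict_proba([[0.10, 2, 7, 0.30]])[:, 1])
```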