Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2006
DOI: 10.1145/1148170.1148177
Improving web search ranking by incorporating user behavior information

Abstract: We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the acc…

Cited by 828 publications (460 citation statements)

References 16 publications
“…The underlying assumption was that a result with a larger amount of clicks is more relevant to the query than a result with fewer clicks. Agichtein et al (2006) proposed an idea of aggregating information from many unreliable user search sessions, instead of treating each user as a reliable expert to predict user relevance assessment of search results. Dou et al (2008) used aggregate click-through logs to learn the ranking of search results, and found that the aggregation of a large number of user clicks is indicative of relevance preferences.…”
Section: "Wisdom Of Crowds" Techniques and Information Retrieval (mentioning)
Confidence: 99%
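The aggregation idea in the statement above can be sketched in a few lines: pool clicks from many individually unreliable sessions and treat total click counts as a noisy relevance signal. The session data and document IDs below are hypothetical, and this is a minimal illustration of the aggregation principle, not the cited papers' actual models.

```python
from collections import Counter

def aggregate_click_ranking(sessions):
    """Pool clicks from many (individually unreliable) sessions and
    rank results by total click count, most-clicked first."""
    counts = Counter()
    for clicked_results in sessions:
        counts.update(clicked_results)
    return [doc for doc, _ in counts.most_common()]

# Hypothetical click logs: each session lists the result IDs clicked.
sessions = [["d1", "d3"], ["d1"], ["d2", "d1"], ["d3"]]
print(aggregate_click_ranking(sessions))  # d1 (3 clicks) ranks first
```

No single session is trusted; only the aggregate count across sessions drives the ordering.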
“…Agichtein, Brill, and Dumais (2006) incorporated many click-through features as user feedback into the ranking process, both directly and indirectly, and obtained very interesting results. Using 3,000 user queries for evaluation, they found a 31% increase in ranking quality compared with other ranking algorithms.…”
Section: Background and Related Work (mentioning)
Confidence: 99%
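One simple way to incorporate implicit feedback "directly" is to blend a feedback score into the baseline ranker's score. The linear blend, the weight, and all scores below are illustrative assumptions for this sketch, not the paper's actual feature set or learning method.

```python
def rerank(baseline_scores, feedback_scores, w=0.5):
    """Blend a baseline ranking score with an implicit-feedback score
    (weight w) and return documents sorted by the combined score."""
    combined = {doc: (1 - w) * baseline_scores[doc]
                     + w * feedback_scores.get(doc, 0.0)
                for doc in baseline_scores}
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical scores: "c" is ranked low by the baseline but has strong
# click feedback, so it moves to the top after blending.
baseline = {"a": 0.9, "b": 0.8, "c": 0.7}
feedback = {"c": 1.0}
print(rerank(baseline, feedback))  # ["c", "a", "b"]
```

Documents without any recorded feedback simply keep their (down-weighted) baseline score, so sparse click data degrades gracefully to the original ranking.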
“…Recent studies show that users tend to click on documents at the top ranks of the result list. Agichtein et al.'s study of the frequency distribution of users' clicks on web search results showed that the relative number of clicks on a document decreases with its rank [3], and that users click on documents at the second, third and fourth ranks with probabilities of about 60%, 50% and 30%, respectively [4]. This shows that users click on highly ranked documents even if those documents are irrelevant.…”
Section: Introduction (mentioning)
Confidence: 96%
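The position bias described in this statement is why raw click counts overstate the relevance of top-ranked results. One common remedy is to normalize observed clicks by an assumed probability that users examine each rank. The examination probabilities and click counts below are illustrative only (loosely echoing the rank-2/3/4 figures quoted above), not values from the cited studies.

```python
# Assumed probability that a user examines each rank (illustrative).
EXAMINE_PROB = {1: 0.9, 2: 0.6, 3: 0.5, 4: 0.3}

def debiased_relevance(clicks_by_rank):
    """Estimate relative relevance as observed clicks divided by the
    assumed examination probability at that rank."""
    return {rank: clicks / EXAMINE_PROB[rank]
            for rank, clicks in clicks_by_rank.items()}

# Hypothetical click counts: rank 1 gets the most raw clicks, but rank 2
# looks most relevant once position bias is factored out.
clicks = {1: 90, 2: 78, 3: 40, 4: 30}
scores = debiased_relevance(clicks)
best_rank = max(scores, key=scores.get)
print(best_rank)  # 2
```

Dividing by the examination probability rewards documents that attract clicks despite being shown in positions users rarely inspect.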