The purpose of this study is to investigate the extent to which two theories, Information Scent and Need for Cognition, explain people's search behaviors when interacting with search engine results pages (SERPs). Information Scent, the perception of the value of information sources, was manipulated by varying the number and distribution of relevant results on the first SERP. Need for Cognition (NFC), a personality trait that measures the extent to which a person enjoys cognitively effortful activities, was measured by a standardized scale. A laboratory experiment was conducted with forty-eight participants, who completed six open-ended search tasks. Results showed that while interacting with SERPs containing more relevant documents, participants examined more documents and clicked deeper in the search result list. When interacting with SERPs that contained the same number of relevant results distributed across different ranks, participants were more likely to abandon their queries when relevant documents appeared later on the SERP. With respect to NFC, participants with higher NFC paginated less frequently and paid less attention to results at lower ranks than those with lower NFC. The interaction between NFC and the number of relevant results on the SERP affected the time spent searching and participants' likelihood of reformulating, paginating, and stopping. Our findings suggest that system effectiveness should be evaluated based on the first page of results, even for tasks that require the user to view multiple documents, and that interface features could be varied based on NFC.
Human assessments of document relevance are needed for the construction of test collections, for ad-hoc evaluation, and for training text classifiers. Showing documents to assessors in different orderings, however, may lead to different assessment outcomes. We examine the effect that threshold priming, the exposure to documents of varying degrees of relevance, has on people's calibration of relevance. Participants judged the relevance of a prologue of documents containing highly relevant, moderately relevant, or non-relevant documents, followed by a common epilogue of documents of mixed relevance. We observe that participants exposed to only non-relevant documents in the prologue assigned significantly higher average relevance scores to prologue and epilogue documents than participants exposed to moderately or highly relevant documents in the prologue. We also examine how need for cognition, an individual difference measure of the extent to which a person enjoys engaging in effortful cognitive activity, impacts relevance assessments. Participants with high need for cognition had a significantly higher level of agreement with expert assessors than participants with low need for cognition did. Our findings indicate that assessors should be exposed to documents from multiple relevance levels early in the judging process, in order to calibrate their relevance thresholds in a balanced way, and that individual difference measures might be a useful way to screen assessors.
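The abstract above compares assessors' "level of agreement with expert assessors" without naming the statistic used. A common chance-corrected choice for this kind of comparison is Cohen's kappa; the following is an illustrative sketch only, with hypothetical graded judgments, and is not the measure or data from the study itself:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both raters labeled independently at their observed rates
    expected = sum(ca[k] * cb[k] for k in ca.keys() & cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical judgments: 0 = non-relevant, 1 = moderately, 2 = highly relevant
expert   = [2, 1, 0, 2, 0, 1, 2, 0]
assessor = [2, 1, 0, 1, 0, 1, 2, 1]
print(round(cohens_kappa(expert, assessor), 3))  # → 0.636
```

A kappa closer to 1 indicates closer calibration to the expert labels; values near 0 indicate agreement no better than chance, which is why kappa (rather than raw percent agreement) is typically preferred when label distributions are skewed.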
Aggregated search is the task of blending results from specialized search services or verticals into the Web search results. While many studies have focused on aggregated search techniques, few studies have tried to better understand how users interact with aggregated search results. This study investigates how task complexity and vertical display (the blending of vertical results into the web results) affect the use of vertical content. Twenty-nine subjects completed six search tasks of varying levels of task complexity using two aggregated search interfaces: one that blended vertical results into the web results and one that only provided indirect vertical access. Our results show that more complex tasks required significantly more interaction and that subjects completing these tasks examined more vertical results. While the amount of interaction was the same between interfaces, subjects clicked on more vertical results when these were blended into the web results. Our results also show an interaction between task complexity and vertical display; subjects clicked on more verticals when completing the more complex tasks with the interface that blended vertical results. Subjects' evaluations of the two interfaces were nearly identical, but when analyzed with respect to their interface preferences, we found a positive relationship between system evaluations and individual preferences. Subjects justified their preference using similar rationales, and their comments illustrate how the display itself can influence judgments of information quality, especially in cases when the vertical results might not be relevant to the search task.

General Terms: Performance, Experimentation, Human Factors.
Keywords: Aggregated search interfaces, search behaviors, evaluation, user study, interaction, task complexity

INTRODUCTION
In addition to Web search, commercial search companies (e.g., Google, Bing, Yahoo!) provide access to a wide range of specialized services known as verticals (e.g., images, video, news). Most published research in aggregated search has focused on automatic methods for predicting which verticals to present (vertical selection) [4,5,11,19] and where in the Web results to present them (vertical presentation) [2,3,23]. Evaluation of these systems has typically been conducted using editorial vertical relevance judgments as the gold standard [2,3,4,5,19], or using user-generated clicks on vertical results as a proxy for relevance [11,23]. While these studies have greatly advanced the state of the art in aggregated search techniques, because users are far removed from the evaluation, they have contributed little insight into how users' higher-level objectives influence their engagement with vertical search results. A few published studies have investigated user behavior with aggregated search interfaces [24,25,28]. Thus far, these studies show two major trends. First, when a vertical is relevant, users prefer to see its results toward the top of the blended results [...