User studies demonstrate that nondomain experts do not use the same information-seeking strategies as domain experts. With the transformation of integrated library systems into Information Gateways in the late 1990s, both nondomain and domain experts gained access to a wide range of information-seeking strategies in a single system. This article describes the results of a study designed to answer three research questions: (1) do nondomain experts enlist the strategies of domain experts? (2) if they do, how do they learn about these strategies? and (3) are they successful in using them? Interviews, audio recordings, screen captures, and observations were used to gather data from 14 undergraduate students who searched an academic library's Information Gateway. The few times that the undergraduates in this study enlisted search strategies characteristic of domain experts, it usually took perseverance, trial and error, serendipity, or a combination of the three for them to find useful information. Although this study's results provide no compelling reasons for systems to support features that make domain-expert strategies possible, there is a need for system features that scaffold nondomain experts from their usual strategies to the strategies characteristic of domain experts.
This article describes the findings of a research project that tested a new subject-access design in an experimental online catalog offering a wide range of subject-searching capabilities and search trees to govern the system's selection of searching capabilities in response to user queries. Library users at two academic libraries searched this experimental catalog for topics of their own choosing, judged the usefulness of retrieved titles, and answered post-search questions about their searching experiences. Mixed results from a quantitative analysis (i.e., precision scores) were supplemented with more conclusive results from a qualitative analysis (i.e., failure analysis). Overall, the analyses demonstrated that the new subject-access design featuring search trees was more effective than users' own choices at selecting a subject-searching approach that produced useful information for the subjects users sought. The qualitative analysis was especially helpful in providing recommendations for improving specific subject-searching approaches to increase their efficiency, increase user perseverance, and encourage browsing. It also suggested enhancements to the new subject-searching design to enable systems to respond to the wide variety of user queries for subjects.
The purpose of this paper is to add to our understanding and knowledge of spelling errors in online catalog searches, based on empirical studies of spelling errors in online catalog searches, and to suggest ways in which systems that detect such errors should handle the errors they detect. Since the introduction of online catalogs in the early 1980s, librarians, system designers, and researchers have had a very accurate record of users' subject and known-item access points in the form of transaction logs. Dozens of researchers with varying intentions have studied the access points in these logs, especially access points that failed to produce retrievals. Some researchers merely described the subject and known-item access points that users entered into online catalogs, and others constructed rather elaborate schemes for categorizing access points that were successful or unsuccessful at producing retrievals. One recurring problem that prevents the retrieval of bibliographic records is the occurrence of spelling errors in online catalog access points. Summing up our knowledge about spelling errors, we know that users make spelling errors; such errors are not very common in online catalog searches, but they do result in searches that fail to yield retrievals; and systems can be programmed to detect spelling errors in user-entered access points. KAREN M. DRABENSTOTT
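The abstract above notes that systems can be programmed to detect spelling errors in user-entered access points. One common way to do this, sketched below as an illustration rather than a description of the studied systems, is to compare a query term against a vocabulary built from the catalog's own indexes and rank near matches by Levenshtein edit distance. The vocabulary, function names, and example terms here are all hypothetical.

```python
# Minimal sketch of spelling-error detection for user-entered access
# points. Assumes the system has a vocabulary of valid index terms;
# everything below is illustrative, not drawn from the cited studies.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def suggest(term: str, vocabulary: set[str], max_dist: int = 2):
    """Return None if the term is in the vocabulary (no error detected);
    otherwise return near matches within max_dist, closest first."""
    if term in vocabulary:
        return None
    candidates = [(edit_distance(term, w), w) for w in vocabulary]
    return [w for d, w in sorted(candidates) if d <= max_dist]

# Hypothetical vocabulary drawn from catalog index terms.
vocab = {"catalog", "bibliography", "retrieval", "library"}
print(suggest("catalog", vocab))   # None (correctly spelled)
print(suggest("catalgo", vocab))   # ['catalog']
```

A production system would build the vocabulary from its own headings and titles and might weight suggestions by term frequency, but the detect-then-suggest structure is the same.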