The ideal user interface is comprehensible, predictable, and controllable, but many current text-search interfaces, especially on the World-Wide Web, involve unnecessarily complex and obscure features. The result is confusion and frustration for advanced users as well as for beginners, scientists, and students [8]. Even when a user interface's design is improved, inconsistencies can cause mistaken assumptions and increase the likelihood of failing to find relevant documents as users move from one search system to another. For example, the search string "Hall effect" could produce various searches, including:
• Exact match for "Hall effect"
• Case-insensitive match for "hall effect"
• Best match for "Hall" and "effect"
• Boolean match for "Hall" and "effect"
• Boolean match for "Hall" or "effect"
Few systems spell out the interpretation they are using. Furthermore, systems often apply surprising query transformations, unpredictable stemming algorithms, and mysterious field weightings. And in many systems, the results are displayed in a relevance ranking whose meaning is a mystery to many users (and sometimes a proprietary secret).
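As a minimal sketch of the ambiguity described above (the tiny corpus and helper names are hypothetical, not taken from the paper), the following shows how the same query string can return different result sets under each interpretation:

```python
# Hypothetical illustration: five interpretations of the query "Hall effect".
import re

docs = {
    1: "The Hall effect was discovered by Edwin Hall in 1879.",
    2: "Measuring the hall effect in thin-film semiconductors.",
    3: "Concert hall acoustics have a strong effect on perceived sound.",
    4: "Edwin Hall also studied other electromagnetic phenomena.",
}

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def exact_phrase(q):                       # exact match for "Hall effect"
    return {d for d, t in docs.items() if q in t}

def ci_phrase(q):                          # case-insensitive "hall effect"
    return {d for d, t in docs.items() if q.lower() in t.lower()}

def boolean_and(q):                        # Boolean "Hall" and "effect"
    qs = set(tokens(q))
    return {d for d, t in docs.items() if qs <= set(tokens(t))}

def boolean_or(q):                         # Boolean "Hall" or "effect"
    qs = set(tokens(q))
    return {d for d, t in docs.items() if qs & set(tokens(t))}

def best_match(q):                         # rank by number of query terms present
    qs = set(tokens(q))
    return sorted(docs, key=lambda d: -len(qs & set(tokens(docs[d]))))

query = "Hall effect"
print(exact_phrase(query))   # {1}
print(ci_phrase(query))      # {1, 2}
print(boolean_and(query))    # {1, 2, 3}
print(boolean_or(query))     # {1, 2, 3, 4}
print(best_match(query))     # e.g. [1, 2, 3, 4]
```

Each interpretation yields a different set or ordering of documents, which is exactly why a system that does not state its interpretation leaves users guessing.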
Although a substantial number of research projects have addressed music information retrieval over the past three decades, the field is still very immature. Few of these projects involve complex (polyphonic) music; methods for evaluation are at a very primitive stage of development; none of the projects tackles the problem of realistically large-scale databases. Many problems to be faced are due to the nature of music itself. Among these are issues in human perception and cognition of music, especially as they concern the recognizability of a musical phrase. This paper considers some of the most fundamental problems in music information retrieval, challenging the common assumption that searching on pitch (or pitch-contour) alone is likely to be satisfactory for all purposes. This assumption may indeed be true for most monophonic (single-voice) music, but it is certainly inadequate for polyphonic (multi-voice) music. Even in the monophonic case it can lead to misleading results. The fact, long recognized in projects involving monophonic music, that a recognizable passage is usually not identical with the search pattern means that approximate matching is almost always necessary, yet this too is severely complicated by the demands of polyphonic music. Almost all text-IR methods rely on identifying approximate units of meaning, that is, words. A fundamental problem in music IR is that locating such units is extremely difficult, perhaps impossible.
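To make the monophonic case concrete (this is an illustrative sketch only, not any cited project's method; the MIDI pitch numbers and the U/D/S contour alphabet are assumptions for the example), a remembered phrase is typically transposed and slightly wrong, so a useful pitch-contour search must score approximate matches rather than demand identity:

```python
# Illustrative sketch: approximate matching on pitch contour (monophonic case).
from difflib import SequenceMatcher

def contour(pitches):
    """Reduce a MIDI-pitch sequence to Up/Down/Same steps ('U', 'D', 'S')."""
    return "".join("U" if b > a else "D" if b < a else "S"
                   for a, b in zip(pitches, pitches[1:]))

def similarity(query_pitches, passage_pitches):
    """Approximate contour similarity in [0, 1]; exact equality is rarely realistic."""
    return SequenceMatcher(None, contour(query_pitches),
                           contour(passage_pitches)).ratio()

theme      = [67, 67, 68, 70, 70, 68, 67, 65]   # passage as stored
remembered = [60, 60, 61, 63, 63, 61, 60]       # transposed and one note short

print(round(similarity(remembered, theme), 2))  # high, though not 1.0
```

Even this toy example breaks down for polyphonic music, where there is no single line from which to extract a contour in the first place.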
We are interested in questions of improving user control in best-match text-retrieval systems, specifically in whether simple visualizations that nonetheless go beyond the minimal ones generally available can significantly help users. Recently, we have been investigating ways to help users decide, given a set of documents retrieved by a query, which documents and passages are worth closer examination. We built a document viewer incorporating a visualization centered around a novel content-displaying scrollbar and color term highlighting, and studied whether the visualization is helpful to non-expert searchers. Participants' reaction to the visualization was very positive, while the objective results were inconclusive.
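The abstract does not describe the implementation; the following is a hypothetical sketch of the core idea behind a content-displaying scrollbar: each query-term occurrence is mapped to a normalized document position and a per-term color so it can be drawn as a mark along the scrollbar track. The term-to-color table and sample text are assumptions.

```python
# Hypothetical sketch: map query-term hits to normalized scrollbar positions.
import re

TERM_COLORS = {"hall": "red", "effect": "blue"}   # assumed per-term colors

def scrollbar_marks(text, term_colors):
    """Return (position, color) pairs, position 0.0 = top of document, 1.0 = bottom."""
    marks = []
    length = max(len(text), 1)
    for term, color in term_colors.items():
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            marks.append((m.start() / length, color))
    return sorted(marks)

doc = ("The Hall effect is measured here ... later sections revisit the "
       "hall voltage ... the effect size depends on carrier density.")
for position, color in scrollbar_marks(doc, TERM_COLORS):
    print(f"draw {color} tick at {position:.0%} of the scrollbar")
```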
We are interested in how ideas from document clustering can be used to improve the retrieval accuracy of ranked lists in interactive systems. In particular, we are interested in ways to evaluate the effectiveness of such systems to decide how they might best be constructed. In this study, we construct and evaluate systems that present the user with ranked lists and a visualization of inter-document similarities. We first carry out a user study to evaluate the clustering/ranked list combination on instance-oriented retrieval, the task of the TREC-6 Interactive Track. We find that although users generally prefer the combination, they are not able to use it to improve effectiveness. In the second half of this study, we develop and evaluate an approach that more directly combines the ranked list with information from inter-document similarities. Using the TREC collections and relevance judgments, we show that it is possible to realize substantial improvements in effectiveness by doing so, and that although users can use the combined information effectively, the system can provide hints that substantially improve on the user's solo effort. The resulting approach shares much in common with an interactive application of incremental relevance feedback. Throughout this study, we illustrate our work using two prototype systems constructed for these evaluations. The first, AspInQuery, is a classic information retrieval system augmented with a specialized tool for recording information about instances of relevance. The other system, Lighthouse, is a Web-based application that combines a ranked list with a portrayal of inter-document similarity. Lighthouse can work with collections such as TREC, as well as the results of Web search engines.
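Since the abstract likens the combined approach to incremental relevance feedback, one way to picture it (a hypothetical sketch; the weights, data, and blending formula are assumptions, not the paper's method) is to re-rank the list by mixing each document's original score with its similarity to documents the user has already judged relevant:

```python
# Hypothetical sketch: blend ranked-list scores with inter-document similarity
# to user-judged relevant documents, in the spirit of incremental relevance feedback.
import numpy as np

def rerank(scores, sim, judged_relevant, alpha=0.7):
    """scores: (n,) original retrieval scores; sim: (n, n) inter-document
    similarities; judged_relevant: indices the user has marked relevant so far."""
    if not judged_relevant:
        return np.argsort(-scores)
    feedback = sim[:, judged_relevant].mean(axis=1)   # closeness to relevant docs
    blended = alpha * scores + (1 - alpha) * feedback
    blended[judged_relevant] = -np.inf                # don't re-suggest judged docs
    return np.argsort(-blended)

rng = np.random.default_rng(0)
scores = rng.random(10)                               # assumed original scores
sim = rng.random((10, 10)); sim = (sim + sim.T) / 2   # assumed symmetric similarities
print(rerank(scores, sim, judged_relevant=[3, 7]))    # new presentation order
```

The hints mentioned in the abstract could then simply be the highest-ranked unjudged documents under such a blended ordering.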