The classification of science into disciplines is at the heart of bibliometric analyses. While most classification systems are implemented at the journal level, their accuracy has been questioned, and paper-level classifications are widely considered more precise. However, few studies have investigated the differences between journal- and paper-level classification systems. This study addresses that gap by comparing journal- and paper-level classifications for the same set of papers and journals, which isolates the effect of classification precision (i.e., journal- or paper-level) and reveals the extent of paper misclassification. Results show that almost half of all papers could be misclassified under journal-level classification systems. Given their importance in the construction and analysis of bibliometric indicators, more attention should be paid to the robustness and accuracy of these disciplinary classification schemes.
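To make the comparison concrete, the sketch below is a hypothetical illustration (not the study's code or data) of how a misclassification rate can be computed once each paper carries both a journal-level and a paper-level discipline label; the fields and records are invented.

    # Hypothetical sketch: estimating paper misclassification under a
    # journal-level scheme by comparing it with a paper-level classification
    # of the same papers. All labels and records below are illustrative.

    papers = [
        {"id": "p1", "journal_field": "Biology", "paper_field": "Biology"},
        {"id": "p2", "journal_field": "Biology", "paper_field": "Chemistry"},
        {"id": "p3", "journal_field": "Physics", "paper_field": "Physics"},
        {"id": "p4", "journal_field": "Physics", "paper_field": "Engineering"},
    ]

    def misclassification_rate(records):
        """Share of papers whose journal-level field disagrees with the
        paper-level field, treating the paper-level label as ground truth."""
        mismatched = sum(r["journal_field"] != r["paper_field"] for r in records)
        return mismatched / len(records)

    print(f"misclassified: {misclassification_rate(papers):.0%}")  # prints 50%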
Many large digital collections are currently organized by subject; although useful, these information organization structures are large and complex and thus difficult to browse. Current online tools and visualization prototypes show small, localized subsets and do not provide the ability to explore the predominant patterns of the overall subject structure. This study describes subject tree modifications that facilitate browsing for documents by capitalizing on the highly uneven distribution of real-world collections. The approach is demonstrated on two large collections organized by the Library of Congress Subject Headings (LCSH) and Medical Subject Headings (MeSH). Results show that the LCSH subject tree can be reduced to 49% of its initial complexity while maintaining access to 83% of the collection, and the MeSH tree can be reduced to 45% of its initial complexity while maintaining access to 97% of the collection. A simple solution that offsets the resulting loss of access is also discussed. The visual impact is demonstrated using traditional outline views and a slider control that allows searchers to change the subject structure dynamically according to their needs. This study has implications for the development of information organization theory and human–information interaction techniques for subject trees.
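As a rough illustration of the pruning idea (an assumption-laden sketch, not the authors' implementation), the following code drops subject headings whose subtrees hold few documents and reports how much of the collection remains reachable; the tree, document counts, and threshold are all invented.

    # Illustrative sketch of subject-tree reduction: cut subtrees with few
    # documents, keeping most of the collection reachable. Not the study's
    # actual algorithm or data.

    def subtree_docs(tree, node):
        """Documents assigned to node plus all of its descendants."""
        total = tree[node]["docs"]
        for child in tree[node]["children"]:
            total += subtree_docs(tree, child)
        return total

    def prune(tree, root, min_docs):
        """Return the headings kept when subtrees holding fewer than
        min_docs documents are removed."""
        kept, stack = [], [root]
        while stack:
            node = stack.pop()
            if subtree_docs(tree, node) >= min_docs:
                kept.append(node)
                stack.extend(tree[node]["children"])
        return kept

    # Toy tree: heading -> direct document count and child headings.
    tree = {
        "Science": {"docs": 10,  "children": ["Physics", "Alchemy"]},
        "Physics": {"docs": 120, "children": []},
        "Alchemy": {"docs": 2,   "children": []},
    }

    kept = prune(tree, "Science", min_docs=5)
    reachable = sum(tree[n]["docs"] for n in kept)
    total = sum(v["docs"] for v in tree.values())
    print(kept, f"access kept: {reachable / total:.0%}")  # ['Science', 'Physics'] access kept: 98%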
Computer users spend time every day interacting with digital files and folders: downloading, moving, naming, navigating to, searching for, sharing, and deleting them. Such file management has been the focus of many studies across various fields, but it has not been explicitly acknowledged as a topic in its own right, nor made the subject of a dedicated review. In this article we present the first dedicated review of this topic and its research, synthesizing more than 230 publications from various research domains to establish what is known and what remains to be investigated, in particular by examining the common motivations, methods, and findings of this previously fragmented body of work. We find three typical research motivations in the literature reviewed: understanding how and why users store, organize, retrieve, and share files and folders; understanding the factors that determine this behavior; and attempting to improve the user experience through novel interfaces and information services. Relevant conceptual frameworks and approaches to designing and testing systems are described, and open research challenges and the significance for other research areas are discussed. We conclude that file management is a ubiquitous, challenging, and relatively unsupported activity that has attracted attention from several disciplines and has broad importance for topics across information science.
This review describes the experimental designs (users, search tasks, measures, etc.) used by 31 controlled user studies of information visualization (IV) tools for textual information retrieval (IR), together with a meta-analysis of the reported statistical effects. Comparable experimental designs allow researchers to compare their results with other reports and support the development of experimentally verified design guidelines about which IV techniques are better suited to which types of IR tasks. The studies generally use a within-subject design in which 15 or more undergraduate students perform tasks ranging from browsing to known-item search on sets of at least 1,000 full-text articles or Web pages on topics of general interest or news. Results of the meta-analysis (N = 8) showed no significant effect of the IV tools compared with text-only equivalents, but the set showed great variability, suggesting an inadequate basis for comparison. Experimental design recommendations are provided that would support the comparison of existing IV tools in IR usability testing.
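For readers unfamiliar with how such effects are pooled, here is a minimal fixed-effect meta-analysis sketch with invented effect sizes and variances (the review's actual data and method may differ); Cochran's Q flags the kind of between-study variability noted above.

    # Hedged sketch of a fixed-effect meta-analysis: pool per-study effect
    # sizes by inverse-variance weighting, then test heterogeneity with
    # Cochran's Q. The eight effects and variances are invented.

    import math

    effects   = [0.30, -0.10, 0.05, 0.50, -0.25, 0.15, 0.00, 0.40]  # study effect sizes
    variances = [0.04,  0.05, 0.03, 0.06,  0.05, 0.04, 0.03, 0.06]  # sampling variances

    weights = [1 / v for v in variances]
    pooled  = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se      = math.sqrt(1 / sum(weights))
    z       = pooled / se

    # Cochran's Q: values much larger than df = k - 1 signal the kind of
    # between-study variability the review reports.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

    print(f"pooled = {pooled:.3f}, z = {z:.2f}, Q = {q:.2f} (df = {len(effects) - 1})")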