Defining and measuring internationality as a function of the influence diffusion of scientific journals is an open problem. There exists no metric to rank journals based on the extent or scale of internationality. Measuring internationality is qualitative, vague, open to interpretation, and limited by vested interests. With the tremendous increase in the number of journals in various fields and the unflinching desire of academics across the globe to publish in "international" journals, it has become an absolute necessity to evaluate, rank, and categorize journals based on internationality. The authors of the current work define internationality as a measure of influence that transcends geographic boundaries. The authors raise concerns about unethical practices in journal publication whereby the scholarly influence of a select few is artificially boosted, primarily through editorial manoeuvres. To counter the impact of such tactics, the authors propose a method that defines and measures internationality by eliminating such local effects when computing the influence of journals. A new metric, the Non-Local Influence Quotient (NLIQ), is proposed as one parameter for internationality computation, along with another novel metric, the Other-Citation Quotient, defined as the complement of the ratio of self-citations to total citations. In addition, SNIP and the International Collaboration Ratio are used as two other parameters. As these journal parameters are not readily available in one place, algorithms to scrape these metrics are written and documented as part of the current manuscript. The Cobb-Douglas production function is utilized as a model to compute JIMI (Journal Internationality Modeling Index). The current work elucidates the metric acquisition algorithms while delivering arguments in favor of the suitability of the proposed model. The acquired data are corroborated using different supervised learning techniques.
As part of future work, the authors present a bigger picture, RAGIS (Reputation And Global Influence Score), which will be computed to facilitate the formation of clusters of journals of high, moderate, and low internationality.
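The Cobb-Douglas model and the Other-Citation Quotient named in the abstract can be sketched in a few lines. This is a minimal illustration, assuming equal elasticity weights and made-up input values; it is not the authors' calibrated model, and the parameter names are assumptions based only on the abstract.

```python
def other_citation_quotient(self_citations, total_citations):
    """OCQ: complement of the ratio of self-citations to total citations."""
    return 1.0 - self_citations / total_citations

def jimi(inputs, elasticities, scale=1.0):
    """Cobb-Douglas form: y = A * prod(x_i ** alpha_i).

    `inputs` are journal parameters (e.g. OCQ, ICR, SNIP, NLIQ) and
    `elasticities` their exponents; both lists are illustrative here.
    """
    assert len(inputs) == len(elasticities)
    y = scale
    for x, alpha in zip(inputs, elasticities):
        y *= x ** alpha
    return y

# Example with the four parameters named in the abstract; values and
# weights are invented for illustration, not taken from the paper.
ocq = other_citation_quotient(self_citations=20, total_citations=100)  # 0.8
params = [ocq, 0.6, 0.9, 0.7]       # OCQ, ICR, normalized SNIP, NLIQ
weights = [0.25, 0.25, 0.25, 0.25]  # assumed equal elasticities
score = jimi(params, weights)
```

With equal weights summing to one, the score reduces to the geometric mean of the inputs, which keeps it in the same 0–1 range as the (normalized) parameters.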
Expert investigators bring advanced skills and deep experience to analyze visual evidence, but they face limits on their time and attention. In contrast, crowds of novices can be highly scalable and parallelizable, but lack expertise. In this paper, we introduce the concept of shared representations for crowd-augmented expert work, focusing on the complex sensemaking task of image geolocation performed by professional journalists and human rights investigators. We built GroundTruth, an online system that uses three shared representations (a diagram, a grid, and a heatmap) to allow experts to work with crowds in real time to geolocate images. Our mixed-methods evaluation with 11 experts and 567 crowd workers found that GroundTruth helped experts geolocate images, and revealed challenges and success strategies for expert-crowd interaction. We also discuss designing shared representations for visual search, sensemaking, and beyond.
This article examines the historical expansion and convergence of the fields of information behavior and human–computer interaction, primarily in terms of the philosophy underlying each field. Information behavior grew out of research in library service provision in the early 1900s, and human–computer interaction grew out of computer science and human factors engineering in the 1960s. While these two fields have had different origins, purposes, and discourses, in recent decades, they have begun to converge. In this article, we map this convergence and consider implications for the future of the information field. We conceptualize their scholarly paradigms as expanding circles, and we show that the circles of information behavior and human–computer interaction are expanding in terms of ontology, epistemology, and axiology—and moreover, they are beginning to overlap substantially. While the two fields continue to be largely separate in terms of scholarly discourses, we suggest that much could be gained by explicitly acknowledging their shared components. Some suggestions for this are discussed, and these are connected to the ongoing iSchool Movement.
Investigators in fields such as journalism and law enforcement have long sought the public's help with investigations. New technologies have also allowed amateur sleuths to lead their own crowdsourced investigations, which have traditionally been the sole purview of expert investigators, with mixed results. Through an ethnographic study of a four-day, co-located event with over 250 attendees, we examine the human infrastructure responsible for enabling the success of an expert-led crowdsourced investigation. We find that the experts enabled attendees to generate useful leads; the attendees formed a community around the event; and the victims' families felt supported. Additionally, the co-located setting, legal structures, and emergent social norms impacted collaborative work practice. We also surface three important tensions to consider in future investigations and provide design recommendations to manage these tensions.