The proliferation of misinformation in online news and its amplification by platforms are growing concerns, leading to numerous efforts to improve the detection of and response to misinformation. Given the variety of approaches, collective agreement on the indicators that signify credible content could allow for greater collaboration and data sharing across initiatives. In this paper, we present an initial set of indicators for article credibility defined by a diverse coalition of experts. These indicators originate both from within an article's text and from external sources or article metadata. As a proof of concept, we present a dataset of 40 articles of varying credibility annotated with our indicators by 6 trained annotators using specialized platforms. We discuss future steps, including expanding annotation, broadening the set of indicators, and considering their use by platforms and the public, towards the development of interoperable standards for content credibility.
Misinformation about critical issues such as climate change and vaccine safety is often amplified on online social and search platforms. Crowdsourcing content credibility assessment to laypeople has been proposed as one strategy to combat misinformation by attempting to replicate the assessments of experts at scale. In this work, we investigate news credibility assessments by crowds versus experts to understand when and how their ratings differ. We gather a dataset of over 4,000 credibility assessments from 2 crowd groups---journalism students and Upwork workers---as well as 2 expert groups---journalists and scientists---on a varied set of 50 news articles related to climate science, a topic with a widespread disconnect between public opinion and expert consensus. Examining the ratings, we find differences in performance due to the makeup of the crowd, such as rater demographics and political leaning, as well as the scope of the tasks the crowd is assigned, such as the genre of the article and the partisanship of the publication. Finally, we find differences between expert assessments due to the differing criteria that journalism versus science experts use---differences that may contribute to crowd discrepancies, but that also suggest a way to reduce the gap by designing crowd tasks tailored to specific expert criteria. From these findings, we outline future research directions to better design crowd processes tailored to specific crowds and types of content.
This panel is one of two sessions organized by the AoIR Ethics Working Committee. It collects five papers exploring a broad (but in many ways common) set of ethical dilemmas faced by researchers engaged with specific platforms such as Reddit, Amazon's Mechanical Turk, and private messaging platforms. These include: a study of people's online conversations about health matters on Reddit in support of a proposed situated-ethics framework for researchers working with publicly available data; an exploration of sourcing practices among Reddit researchers to determine whether their sources could be unmasked and located in Reddit archives; a broader systematic review of over 700 research studies that used Reddit data to assess the kinds of analysis and methods researchers are engaging in, as well as any ethical considerations that emerge when researching Reddit; a critical examination of the use of Amazon's Mechanical Turk for academic research; and an investigation into current practices and ethical dilemmas faced when researching closed messaging applications and their users. Taken together, these papers illuminate emerging ethical dilemmas facing researchers who investigate novel platforms and user communities: challenges that are often not fully addressed, if even contemplated, in existing ethical guidelines. These papers are among those under consideration for publication in a special issue of the Journal of Information, Communication and Ethics in Society associated with the AoIR Ethics Working Committee and AoIR2021.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.