2020
DOI: 10.1145/3415164

Investigating Differences in Crowdsourced News Credibility Assessment

Abstract: Misinformation about critical issues such as climate change and vaccine safety is oftentimes amplified on online social and search platforms. The crowdsourcing of content credibility assessment by laypeople has been proposed as one strategy to combat misinformation by attempting to replicate the assessments of experts at scale. In this work, we investigate news credibility assessments by crowds versus experts to understand when and how ratings between them differ. We gather a dataset of over 4,000 credibility …

Cited by 59 publications (41 citation statements)
References 30 publications
“…A particular strength of our approach involves stimulus selection. Past research has demonstrated that laypeople can discern truth from falsehoods on stimulus sets curated by experimenters (25, 27); for example, Bhuiyan et al. (28) found that laypeople's ratings were highly correlated with expert ratings for claims about scientific topics that had a "high degree of consensus among domain experts, as opposed to political topics in which the potential for stable ground truth is much more challenging." However, the crowd's performance will obviously depend on the particular statements that participants are asked to evaluate.…”
Section: Introduction
confidence: 99%
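The crowd-versus-expert correlation described in this statement can be illustrated with a minimal sketch. The ratings below are hypothetical and invented for illustration; only the use of a rank correlation (here Spearman's rho via scipy.stats.spearmanr) reflects the general kind of comparison such studies report.

    # Minimal sketch: comparing crowd and expert credibility ratings with a
    # rank correlation. All ratings below are hypothetical.
    from scipy.stats import spearmanr

    # Hypothetical mean credibility ratings (1-5 scale) for ten articles.
    crowd_ratings  = [4.2, 1.8, 3.5, 4.8, 2.1, 3.9, 1.5, 4.4, 2.7, 3.1]
    expert_ratings = [4.0, 2.0, 3.2, 4.9, 1.9, 4.1, 1.7, 4.5, 2.4, 3.4]

    # Spearman's rho measures how well the two rankings of articles agree;
    # values near 1 indicate the crowd orders articles much like experts do.
    rho, p_value = spearmanr(crowd_ratings, expert_ratings)
    print(f"Spearman's rho = {rho:.2f} (p = {p_value:.4f})")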
“…Considering different groups of annotators (or samples of articles) may lead to different annotations and, consequently, to a possibly different assessment of the reliability of the same news source. Work in (Bhuiyan et al. 2020) highlights that among groups of experienced annotators with different backgrounds (i.e., journalists and scientists), there is no perfect agreement regarding the credibility assessment of news. Therefore, it is reasonable to think that if organizations such as GDI and NG commission the analysis to people with different skills, the result of the evaluation changes even if the criteria find a perfect conceptual mapping.…”
Section: Discussion
confidence: 99%
“…In order to lay a solid foundation for automating the process, in this paper we present an assessment of the procedures and outcomes of the GDI and NewsGuard evaluation methodologies, run over the same set of news media. An assessment of this kind acquires considerable importance if we consider that, in the literature, researchers have conducted analyses in the field of misinformation by leveraging the tags assigned to news sources by such organizations; see, e.g., (Aker, Vincentius, and Bontcheva 2019; Grinberg et al. 2019; Shao et al. 2018; Mattei et al. 2022; Caldarelli et al. 2021). Despite the wide use of this approach, no one, to the best of our knowledge, has so far measured the agreement between different evaluation processes (agreement measured both in terms of the criteria adopted and in terms of the final scores given to the news sources).…”
Section: Introduction
confidence: 99%
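As a rough illustration of what measuring agreement "in terms of final scores" could look like, the sketch below computes Cohen's kappa between two hypothetical per-source reliability labelings, one per rating organization. The labels, source counts, and the use of sklearn.metrics.cohen_kappa_score are assumptions made for illustration, not the cited paper's actual method.

    # Minimal sketch: chance-corrected agreement between two evaluators'
    # reliability labels for the same news sources. Labels are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical labels assigned by two organizations to eight sources.
    labels_org_a = ["reliable", "unreliable", "reliable", "reliable",
                    "unreliable", "reliable", "unreliable", "reliable"]
    labels_org_b = ["reliable", "unreliable", "unreliable", "reliable",
                    "unreliable", "reliable", "reliable", "reliable"]

    # Kappa of 1.0 means perfect agreement; 0 means chance-level agreement.
    kappa = cohen_kappa_score(labels_org_a, labels_org_b)
    print(f"Cohen's kappa = {kappa:.2f}")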
“…Further research involves the enlargement of the annotated corpus, focusing on the indicators that proved to be the most relevant in the present research. Additional news sources will be automatically crawled from the Portuguese Web Archive 3, and the annotation will involve the participation of ordinary news readers, possibly adopting strategies like the ones reported in [4].…”
Section: Limitations and Future Work
confidence: 99%