2021
DOI: 10.54501/jots.v1i1.15

Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking

Abstract: Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the "wisdom of the crowds" suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We…
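The aggregation idea behind the study, pooling many lay ratings of an article's veracity, can be illustrated with a minimal Python sketch. The labels, the majority-vote rule, and the fraction_flagged helper below are illustrative assumptions, not the authors' actual rating scale or pipeline; the paper goes further and compares such aggregates against professional fact-checker verdicts.

```python
from collections import Counter
from statistics import mean

# Hypothetical crowd ratings for one article: each lay rater labels the article
# as "true", "false", or "misleading". Illustrative aggregation only, not the
# authors' actual pipeline or rating scheme.
crowd_ratings = ["false", "false", "misleading", "false", "true", "false"]

def majority_vote(ratings):
    """Return the most common label among crowd ratings (simple plurality)."""
    label, _count = Counter(ratings).most_common(1)[0]
    return label

def fraction_flagged(ratings, flagged_labels=("false", "misleading")):
    """Share of raters who flagged the article as false or misleading."""
    return mean(1.0 if r in flagged_labels else 0.0 for r in ratings)

print(majority_vote(crowd_ratings))               # -> "false"
print(round(fraction_flagged(crowd_ratings), 2))  # -> 0.83
```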

Cited by 47 publications (74 citation statements: 7 supporting, 67 mentioning, 0 contrasting)
References 35 publications
“…Few estimates are exempt from these limitations. Recently, Godel et al. (2021) developed an algorithm that selects the most popular news stories from low-quality sources and had participants rate these stories at most 72 hours after publication. This overcomes the bias from self-selection of news and reduces the delay between circulation and evaluation.…”
Section: Authorship (mentioning)
confidence: 99%
“…This delegates questions of fact to experts while reserving matters of subjective judgment for the more representative citizen panel. The quality of lay raters' judgments of the holistic attribute of misinformation can be assessed in part by using the judgments of expert raters as a benchmark [3,6,27,51,52]. Previous studies indicate that panels of lay raters can perform reasonably well against this benchmark on published news articles, suggesting that this challenge may not be insurmountable for most articles [3,51].…”
Section: Conceptual Challenges (mentioning)
confidence: 99%
“…The subset was selected on the criterion that the headline or lede included a factual claim. The other 165 articles consisted of the most popular article each day from each of five categories: liberal mainstream news; conservative mainstream news; liberal low-quality news; conservative low-quality news; and low-quality news sites with no clear political orientation [27]. Five articles were selected on each of 33 days between November 13, 2019 and February 6, 2020.…”
Section: Article Set (mentioning)
confidence: 99%
“…Fact-checking by crowdsourcing exploits the wisdom of crowds. Crowd judgments are reported to correlate highly with professional judgments in fact-checking (Epstein, Pennycook, and Rand 2020), although they are not yet as accurate as professionals' (Godel et al. 2021). While fact-checking by crowdsourcing is a promising complementary approach to trained fact-checkers, it still requires a great amount of time and resources if the crowd needs to verify a large amount of information.…”
Section: Introduction (mentioning)
confidence: 99%
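The "high correlation with professional judgment" referenced in the statement above is typically measured by correlating per-article crowd aggregates with expert ratings. A hedged sketch follows: the score lists and the assumed 1-7 veracity scale are invented for illustration and are not data from the cited studies.

```python
from math import sqrt

# Hypothetical per-article veracity scores: the mean crowd rating vs. a
# professional fact-checker's rating on an assumed 1-7 scale. Invented numbers,
# not results from Godel et al. (2021) or Epstein, Pennycook, and Rand (2020).
crowd_scores = [6.2, 2.1, 5.8, 3.0, 1.5, 6.9, 4.4]
expert_scores = [7.0, 2.0, 6.0, 3.5, 1.0, 7.0, 4.0]

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A value near 1.0 would indicate crowd aggregates track expert judgments closely.
print(round(pearson(crowd_scores, expert_scores), 3))
```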