Companion Proceedings of The Web Conference 2018 (WWW '18)
DOI: 10.1145/3184558.3188731
A Structured Response to Misinformation

Abstract: The proliferation of misinformation in online news and its amplification by platforms are a growing concern, leading to numerous efforts to improve the detection of and response to misinformation. Given the variety of approaches, collective agreement on the indicators that signify credible content could allow for greater collaboration and data-sharing across initiatives. In this paper, we present an initial set of indicators for article credibility defined by a diverse coalition of experts. These indicators or…

Cited by 84 publications (45 citation statements)
References 30 publications
“…Crowdworkers and domain experts may disagree on classification tasks that require special knowledge [46]. However, Zhang et al. [51] found that non-expert judgments about the characteristics of statements in news articles, such as whether a claim was well supported by the evidence provided, showed high inter-annotator agreement and high correlation with expert judgments. In the context of our study, this suggests that crowdworkers may find it easier to provide reasons for citations than to decide which sentences require them in the first place.…”
Section: Round
confidence: 99%
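The non-expert/expert agreement described in this statement is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (the "well supported?" labels below are hypothetical, not data from the cited studies):

```python
# Cohen's kappa between two annotators, computed from scratch.
# The crowd/expert label lists are illustrative only.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement for two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed fraction of items where the annotators agree
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if both labeled independently at their own rates
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical "is this claim well supported?" judgments
crowd  = ["yes", "yes", "no", "yes", "no", "yes", "no", "no"]
expert = ["yes", "yes", "no", "yes", "yes", "yes", "no", "no"]
print(round(cohens_kappa(crowd, expert), 3))  # → 0.75
```

Kappa near 1 indicates the high inter-annotator agreement the quoted study reports; values near 0 mean agreement no better than chance.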
“…A quotee can be an unnamed scientist or an unnamed study if the person or article being quoted is not disclosed (e.g., "researchers believe," "most scientists think," and other so-called "weasel" words). Sources that are not specifically attributed, such as these, are as a general rule considered less credible than sources in which the quotee is named [62].…”
Section: 12
confidence: 98%
“…Recent work has demonstrated methods to automate the extraction of signals or indicators of article quality. These indicators are either expressed at a conceptual level [58] (e.g., balance of viewpoints, respect of personal rights) or operationalized as features that can be computed from an article [62] (e.g., expert quotes or citations). Shu et al. [55] describe an approach for detecting fake news on social media based on social and content indicators.…”
Section: Evaluation Of Quality Of News
confidence: 99%
“…A few early studies tried to detect fake news based on linguistic features extracted from the text of news stories [20], [21], [27]. Recent studies have also shown that social networking features play a very important role in detecting fake news [24].…”
Section: Related Work
confidence: 99%
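The linguistic features mentioned in this statement are typically simple surface statistics computed from article text. A minimal sketch of the kind of features such studies extract (the feature set and example headline are illustrative, not taken from the cited work):

```python
# Toy surface-level linguistic features of the sort used by early
# text-based fake-news detectors. The example headline is made up.
import re

def linguistic_features(text):
    """Return a few simple surface statistics for a piece of text."""
    words = re.findall(r"[A-Za-z']+", text)
    n = len(words) or 1  # avoid division by zero on empty input
    return {
        "num_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / n,
        # Sensationalist style markers: exclamations and ALL-CAPS words
        "exclamation_ratio": text.count("!") / n,
        "all_caps_ratio": sum(w.isupper() and len(w) > 1 for w in words) / n,
    }

feats = linguistic_features("SHOCKING!! Scientists hate this one trick!")
print(feats["num_words"])  # → 6
```

Feature vectors like this one would then feed a standard classifier; the cited studies also note that social-network signals complement such purely textual features.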