Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1003

DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning

Abstract: Misinformation such as fake news is one of the big challenges of our society. Research on automated fact-checking has proposed methods based on supervised learning, but these approaches do not consider external evidence apart from labeled training instances. Recent approaches counter this deficit by considering external sources related to a claim. However, these methods require substantial feature modeling and rich lexicons. This paper overcomes these limitations of prior work with an end-to-end model for evidence-aware…
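
The abstract describes a neural network that aggregates signals from external evidence articles, their language, and the trustworthiness of their sources. Below is a minimal PyTorch sketch of such an evidence-aware credibility classifier; the layer sizes, the averaged claim representation, and the exact attention formulation are illustrative assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn

class EvidenceAwareCredibility(nn.Module):
    """Sketch of an evidence-aware credibility model: claim-conditioned attention
    over an evidence article, combined with source embeddings (assumed design)."""

    def __init__(self, vocab_size, n_sources, emb_dim=100, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.source_emb = nn.Embedding(n_sources, emb_dim)       # claim and article sources
        self.article_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * emb_dim, 1)                     # claim-conditioned word attention
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden + 2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),                                  # credibility score
        )

    def forward(self, claim_ids, article_ids, claim_src, article_src):
        claim = self.word_emb(claim_ids).mean(dim=1)               # (B, emb): averaged claim words
        art_words = self.word_emb(article_ids)                     # (B, T, emb)
        # Score each article word by its relevance to the claim, then normalize.
        claim_rep = claim.unsqueeze(1).expand(-1, art_words.size(1), -1)
        scores = self.attn(torch.cat([art_words, claim_rep], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=1)                     # (B, T)
        ctx, _ = self.article_lstm(art_words)                      # (B, T, 2*hidden) contextual states
        evidence = (weights.unsqueeze(-1) * ctx).sum(dim=1)        # attention-weighted article summary
        # Combine article evidence with source-trustworthiness embeddings and classify.
        feats = torch.cat([evidence,
                           self.source_emb(claim_src),
                           self.source_emb(article_src)], dim=-1)
        return torch.sigmoid(self.classifier(feats)).squeeze(-1)


# Toy forward pass: one claim paired with one retrieved evidence article.
model = EvidenceAwareCredibility(vocab_size=1000, n_sources=50)
claim = torch.randint(0, 1000, (1, 8))       # 8 claim tokens
article = torch.randint(0, 1000, (1, 40))    # 40 article tokens
score = model(claim, article, torch.tensor([3]), torch.tensor([7]))
print(score)  # credibility in (0, 1) for this claim-article pair
```

Each claim-article pair yields one score; with several retrieved evidence articles per claim, the per-article scores would typically be aggregated (e.g. averaged) before a final decision.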

Citations: cited by 223 publications (214 citation statements)
References: 35 publications
“…A [47], who build a framework to classify true and false claims, and also provide self-evidence for the credibility assessment. They evaluate their model against some state-of-the-art techniques on different collections of news articles (cf.…”
Section: Content-based
Citation type: mentioning
Confidence: 99%
“…They also apply the Elaboration Likelihood Model [41] to news categories, and suggest that consuming false news requires little energy and cognition, making them more appealing to the readers. A neural network model is also presented by Popat et al. (2018)…”
Citation type: mentioning
Confidence: 99%
“…Given a KG K and a claim f , several approaches have been developed to estimate if f is a valid claim in K. In some of these methods, facts in the KG are leveraged to create features, such as paths [20,4] or embeddings [2,22], which are then used by classifiers to label as true or false a given test claim. Other methods rely on searching for occurrences of the given claim on Web pages [5,18]. However, such models are based on Machine Learning (ML) classifiers that in the best case can report the source of evidence for a decision but lack the ability to provide comprehensible descriptions of how a decision has been taken for a given claim.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
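
The passage above summarizes KG-based fact checking: derive features for a candidate triple from the knowledge graph (for example connecting paths or embedding scores) and let a standard classifier label the claim true or false. The following sketch illustrates that pipeline under stated assumptions; the TransE-style score, the path-count feature, and the synthetic data are placeholders, not any cited system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def triple_features(h_vec, r_vec, t_vec, n_connecting_paths):
    """Feature vector for a claim (h, r, t): embedding plausibility + path evidence."""
    embedding_score = -np.linalg.norm(h_vec + r_vec - t_vec)  # higher = more plausible
    return np.array([embedding_score, n_connecting_paths])

def sample(n, is_true):
    """Synthetic triples: true ones have small ||h + r - t|| and more connecting paths."""
    feats = []
    for _ in range(n):
        h, r = rng.normal(size=50), rng.normal(size=50)
        t = h + r + rng.normal(scale=0.1 if is_true else 2.0, size=50)
        paths = rng.poisson(5 if is_true else 1)
        feats.append(triple_features(h, r, t, paths))
    return np.array(feats)

# Train a simple classifier over the KG-derived features.
X = np.vstack([sample(200, True), sample(200, False)])
y = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression().fit(X, y)

# Label an unseen candidate claim from its KG features.
h, r = rng.normal(size=50), rng.normal(size=50)
print(clf.predict([triple_features(h, r, h + r, n_connecting_paths=4)]))  # likely [1]
```
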
“…For these reasons, in many cases, rules cannot be triggered. We identify these cases and resort to mining Web pages to get evidence for missing facts that are crucial to reach a decision for a claim [18].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…To fight against fake news, many fact-checking systems ranging from human-based systems (e.g. Snopes.com), classical machine learning frameworks [20,34,38] to deep learning models [29,39,56,57] were developed to determine credibility of online news and information. However, falsified news is still disseminated like wild fire [31,59] despite dramatic rise of fact-checking sites worldwide [21].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%