Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2147
SemEval-2019 Task 7: RumourEval, Determining Rumour Veracity and Support for Rumours

Abstract: This is the proposal for RumourEval-2019, which will run in early 2019 as part of that year's SemEval event. Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the dangers of "fake news" have become a mainstream concern. Yet automated support for rumour checking remains in its infancy. For this reason, it is important that a shared task in this area continues to provide a focus for effort, which is likely to increase. We therefore propose a continuat…

Cited by 175 publications (224 citation statements) | References 25 publications
“…Rumour classification and rumour verification attract significant attention from the research community in shared tasks like RumourEval [8]. According to the type of data used, rumour detection approaches can be divided into three major categories, content-based, feature-based and propagation-based.…”
Section: Related Work (mentioning)
confidence: 99%
“…After cleansing the collection, the final dataset consists of 7.2K sentences provided by 512 unique contributors, for whom demographic data is also collected and made available. The SemEval'17 challenge dataset contains 5.5K crowdsourced annotated claims [11], while the FEVER dataset, with 185K entries, extracts claims from Wikipedia. Semantics-preserving sentence-altering techniques are then applied and the resulting claims are annotated by the crowd [25].…”
Section: Use Cases and Exploitation (mentioning)
confidence: 99%
“…For more details about these two tasks, please check the task description paper from the task organizers (Gorrell et al, 2019).…”
Section: Introduction (mentioning)
confidence: 99%