2021
DOI: 10.1007/978-3-030-85251-1_19
Overview of the CLEF–2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News

Cited by 40 publications (18 citation statements) · References 40 publications
“…The CLEF 2018-2022 CheckThat! lab had a shared task on this (Atanasova et al., 2019a; Shaar et al., 2020b, 2021c; Nakov et al., 2022a).…”
Section: Context Modeling for Factuality (mentioning)
confidence: 99%
“…The 2020 edition featured three main tasks: detecting previously fact-checked claims, evidence retrieval, and actual fact-checking of claims [9,11]. Similarly, the 2021 edition focused on detecting check-worthy claims, previously fact-checked claims, and fake news [56,57]. Whereas the first editions focused mostly on political debates and speeches, and later on tweets, the 2021 edition added the verification of news articles.…”
Section: Introduction (mentioning)
confidence: 99%
“…We have more than 34K annotations about several topics, including COVID-19 and politics, which cover all subtasks 1A-1D [10,57].…”
(mentioning)
confidence: 99%
“…There are various types of potentially harmful content in social media such as misinformation and fake news [1], aggression [2], cyber-bullying [3,4], pejorative language [5], offensive language [6], online extremism [7], to name a few. The automatic identification of problematic content has been receiving significant attention from the AI and NLP communities.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, a recent article pointed out that Facebook does not have technology for identifying hate speech in the 22 official languages of India, its biggest market worldwide. 1 To further advance research in this field, the HASOC 2021 competition contributes empirically driven research aiming to find the best methods for identifying offensive content in social media. In its third edition, HASOC 2021 features re-runs of the English and Hindi tasks, allowing for better comparison with the results from the HASOC 2019 [15] and HASOC 2020 [16] editions.…”
Section: Introduction (mentioning)
confidence: 99%