Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations 2019
DOI: 10.18653/v1/p19-3026
ClaimPortal: Integrated Monitoring, Searching, Checking, and Analytics of Factual Claims on Twitter

Abstract: We present ClaimPortal, a web-based platform for monitoring, searching, checking, and analyzing English factual claims on Twitter. We explain the architecture of ClaimPortal, its components and functions, and the user interface. While the last several years have witnessed a substantial growth in interests and efforts in the area of computational fact-checking, ClaimPortal is a novel infrastructure in that fact-checkers have largely skipped factual claims in tweets. It can be a highly powerful tool to both gener…

Cited by 10 publications (9 citation statements); References 11 publications; citing publications span 2020–2023.
“…The challenge attracted several models with top submissions [7,32,44] all using some version of transformer-based models like BERT [11] and RoBERTa [24] along with tweet meta-data and lexical features. Outside of CLEF challenges, some works [12,27] have also conducted a detailed study on detecting check-worthy tweets in U.S. politics and proposed real-time systems to monitor and filter them. Taking inspiration from [10], Gupta et al. [16] address the limitations of current methods in cross-domain claim detection by proposing a generalized claim detection model called LESA (Linguistic Encapsulation and Semantic Amalgamation).…”
Section: Unimodal Approaches
confidence: 99%
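The statement above describes check-worthiness detection built on transformer encoders (BERT/RoBERTa) combined with tweet metadata and lexical features. Below is a minimal sketch of that general setup in Python; the encoder checkpoint, the specific metadata features, and the classifier head are illustrative assumptions, not the configuration of the cited systems or of ClaimPortal itself.

# Minimal sketch: scoring tweets for check-worthiness with a transformer encoder
# plus simple metadata/lexical features (the feature choices here are illustrative
# assumptions, not the features used by the cited systems).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class CheckWorthinessClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", n_meta_features=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Small head over the [CLS] embedding concatenated with metadata features.
        self.head = nn.Sequential(
            nn.Linear(hidden + n_meta_features, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # check-worthy vs. not check-worthy
        )

    def forward(self, input_ids, attention_mask, meta_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        fused = torch.cat([cls, meta_features], dim=-1)
        return self.head(fused)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = CheckWorthinessClassifier()

tweet = "The unemployment rate dropped to 3.5% last month."
enc = tokenizer(tweet, return_tensors="pt", truncation=True, max_length=128)
# Hypothetical metadata/lexical features: log-scaled retweet count,
# has-numeral flag, has-URL flag.
meta = torch.tensor([[2.3, 1.0, 0.0]])
with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"], meta)
print(torch.softmax(logits, dim=-1))  # untrained scores, for illustration only

In this sketch the [CLS] embedding is concatenated with a small hand-crafted feature vector before a two-layer classification head; an actual system would fine-tune the encoder on tweets labeled as check-worthy or not.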
“…PerspectroScope (Chen et al., 2019) allows users to query a natural language claim and extract textual evidence in support of or against the claim. ClaimPortal (Majithia et al., 2019) is an integrated infrastructure for searching and checking factual claims on Twitter. TARGER (Chernodub et al., 2019) is an argument mining framework for tagging arguments in free input text and keyword-based retrieval of arguments from the argument-tagged corpus.…”
Section: Related Work
confidence: 99%
“…This textual evidence is key to tasks such as developing new hypotheses, designing informative experiments, or comparing and validating new findings against previous knowledge. While the last several years have witnessed substantial growth in interests and efforts in evidence mining (Lippi and Torroni, 2016; Wachsmuth et al., 2017; Stab et al., 2018; Chen et al., 2019; Majithia et al., 2019; Chernodub et al., 2019; Allot et al., 2019), little work has been done for evidence mining system development in the scientific literature. A significant difference between evidence in the scientific literature and evidence in other corpora (e.g., the online debate corpus) is that scientific evidence usually does not have a strong sentiment (i.e., positive, negative or neutral) in the opinion it holds.…”
Section: Introduction
confidence: 99%
“…This textual evidence is key to tasks such as developing new hypotheses, designing informative experiments, or comparing and validating new findings against previous knowledge. While the last several years have witnessed substantial growth in interests and efforts in evidence mining (Lippi and Torroni, 2016; Wachsmuth et al., 2017; Stab et al., 2018; Majithia et al., 2019; Chernodub et al., 2019; Allot et al., 2019), little work has been done for evidence mining system development in the scientific literature. A significant difference between evidence in the scientific literature and evidence in other corpora (e.g., the online debate corpus) is that scientific evidence usually does not have a strong sentiment (i.e., positive, negative or neutral) in the opinion it holds.…”
Section: Introduction
confidence: 99%