Proceedings of the Fourteenth Workshop on Semantic Evaluation 2020
DOI: 10.18653/v1/2020.semeval-1.229
JUST at SemEval-2020 Task 11: Detecting Propaganda Techniques Using BERT Pre-trained Model

Abstract: This paper presents the JUST team's submission to SemEval-2020 Task 11, Detection of Propaganda Techniques in News Articles. Of the two subtasks in this competition, we participated in the Technique Classification (TC) subtask, which aims to identify the propaganda techniques used in specific propaganda fragments. We implemented various models to detect propaganda. Our proposed model is based on the BERT uncased pre-trained language model, as it has achieved state-of-the-art perform…

Cited by 12 publications (4 citation statements)
References 21 publications
“…Their evaluation revealed that char n-grams produced the best results when combined with Nela. Likewise, other approaches such as Oliinyk et al (2020) and Altiti, Abdullah & Obiedat (2020) used TF-IDF, POS, lexicon-based features, and word vectors for propaganda identification.…”
Section: Related Work
confidence: 99%
“…Therefore, it is practicable at a large scale and, at the very least, saves moderators from filtering through unwanted content. This automatic fact evaluation focuses on the article's material, claims, and statements rather than information such as the source or rate of dissemination [53][54][55]. Furthermore, the ClaimRank technique identifies claims needing to be verified and refers to fact-checking websites that utilize manual or automated techniques to verify claims [56].…”
Section: Related Work
confidence: 99%
“…As an example, the CN-HIT-IT.NLP team and ECNU-SenseMaker (Zhao et al, 2020) both used a variant of K-BERT (Liu et al, 2020a) with additional data; the former injects relevant triples from ConceptNet into the language model, while the latter also uses ConceptNet's unstructured text to pre-train the language model. Other systems relied on ensemble models consisting of different language models such as RoBERTa and XLNet (Liu, 2020; Altiti et al, 2020).…”
Section: Background
confidence: 99%