Proceedings of the Conference Recent Advances in Natural Language Processing - Deep Learning for Natural Language Processing Methods and Applications, 2021
DOI: 10.26615/978-954-452-072-4_168
“Don’t discuss”: Investigating Semantic and Argumentative Features for Supervised Propagandist Message Detection and Classification

Abstract: One of the mechanisms through which disinformation spreads online, in particular through social media, is the use of propaganda techniques. These include specific rhetorical and psychological strategies, ranging from leveraging emotions to exploiting logical fallacies. In this paper, our goal is to push forward research on propaganda detection based on text analysis, given the crucial role these methods may play in addressing this major societal issue. More precisely, we propose a supervised approach to …

Cited by 15 publications (6 citation statements) | References 17 publications

Citation statements, ordered by relevance:
“…We cast this task as a sentence-span classification problem and address it with a transformer architecture. Results reach the performance of SoTA systems on the tasks of propaganda detection and classification (for a comparison with SoTA algorithms, we refer to Vorakitphan et al. (2021)).…”
Section: Introduction (mentioning)
confidence: 77%
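As an illustration of the sentence-level setup this statement describes, here is a minimal sketch of transformer-based propaganda classification. The checkpoint name, binary label set, and example sentence are assumptions for demonstration only, not the cited system's actual configuration.

```python
# Minimal sketch of sentence-level propaganda classification with a
# transformer encoder. Checkpoint and label set are illustrative
# assumptions, not the cited system's configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-base"  # assumption: any encoder checkpoint would do
LABELS = ["non-propaganda", "propaganda"]  # binary detection variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)
model.eval()  # untrained head here; a real system would fine-tune first

sentence = "They want to destroy everything we stand for!"
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])
```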
“…Consequently, distinct losses are computed for each model: fallacy loss (loss_fal), component loss (loss_cmp), relation loss (loss_rel), and part-of-speech loss (loss_pos). These individual losses are combined by multiplying them with an arbitrary α value of 0.1, yielding a unified average loss referred to as the joint loss (Vorakitphan et al., 2021). In our study, we opted to investigate empirically the α value that yields the best performance, as evidenced by our experiments (see Appendix D for the exhaustive evaluation).…”
Section: Model (mentioning)
confidence: 99%
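The following is a minimal sketch of the α-scaled joint-loss averaging this statement describes, assuming PyTorch. The four classifier heads are reduced to placeholder logits, and the batch size and class counts are illustrative, not taken from the cited work.

```python
# Sketch of the joint loss: four task losses (fallacy, component,
# relation, POS) are summed, scaled by alpha, and averaged. The logits
# below are random placeholders standing in for real model outputs.
import torch
import torch.nn.functional as F

def joint_loss(losses: list[torch.Tensor], alpha: float = 0.1) -> torch.Tensor:
    """Alpha-scaled mean of the individual task losses."""
    return alpha * torch.stack(losses).sum() / len(losses)

batch = 8  # placeholder batch size
loss_fal = F.cross_entropy(torch.randn(batch, 14, requires_grad=True),
                           torch.randint(0, 14, (batch,)))
loss_cmp = F.cross_entropy(torch.randn(batch, 3, requires_grad=True),
                           torch.randint(0, 3, (batch,)))
loss_rel = F.cross_entropy(torch.randn(batch, 2, requires_grad=True),
                           torch.randint(0, 2, (batch,)))
loss_pos = F.cross_entropy(torch.randn(batch, 17, requires_grad=True),
                           torch.randint(0, 17, (batch,)))

loss = joint_loss([loss_fal, loss_cmp, loss_rel, loss_pos])
loss.backward()  # would propagate through all four heads in a real model
```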
“…To combine the results from the sentence-span based RoBERTa with the feature-based BiLSTM, we apply the joint-loss strategy proposed in (Vorakitphan et al., 2021). Each model produces a loss per batch using the CrossEntropy loss function L, following:

$loss_{joint} = \frac{\alpha \times (loss_{sentence} + loss_{span} + loss_{\text{sem-arg features}})}{N_{loss}}$

where each loss value is produced by the CrossEntropy function of its classifier (e.g., loss_sentence and loss_span from the sentence- and span-level RoBERTa models, and the semantic-argumentation feature loss from the BiLSTM model).…”
Section: PROTECT Architecture (mentioning)
confidence: 99%
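A minimal sketch of the quoted formula, assuming PyTorch: CrossEntropy losses from three placeholder classifier heads (sentence-level RoBERTa, span-level RoBERTa, feature-based BiLSTM) are combined with α = 0.1 and N_loss = 3. Batch size and class counts are illustrative assumptions.

```python
# Sketch of the quoted joint loss: three CrossEntropy losses are summed,
# scaled by alpha, and divided by the number of losses (N_loss = 3).
# All logits are random placeholders for the real classifier outputs.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # the L in the quoted formula
alpha = 0.1
batch = 8  # placeholder batch size

sent_logits = torch.randn(batch, 2, requires_grad=True)   # sentence-level RoBERTa head
span_logits = torch.randn(batch, 15, requires_grad=True)  # span-level RoBERTa head
feat_logits = torch.randn(batch, 2, requires_grad=True)   # feature-based BiLSTM head

loss_sentence = criterion(sent_logits, torch.randint(0, 2, (batch,)))
loss_span = criterion(span_logits, torch.randint(0, 15, (batch,)))
loss_features = criterion(feat_logits, torch.randint(0, 2, (batch,)))

losses = [loss_sentence, loss_span, loss_features]
loss_joint = alpha * sum(losses) / len(losses)  # N_loss = 3
loss_joint.backward()
```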