2022
DOI: 10.3390/math10040569
MisRoBÆRTa: Transformers versus Misinformation

Abstract: Misinformation is considered a threat to our democratic values and principles. The spread of such content on social media polarizes society and undermines public discourse by distorting public perceptions and generating social unrest while lacking the rigor of traditional journalism. Transformers and transfer learning proved to be state-of-the-art methods for multiple well-known natural language processing tasks. In this paper, we propose MisRoBÆRTa, a novel transformer-based deep neural ensemble architecture …

Cited by 20 publications (18 citation statements)
References 45 publications (73 reference statements)
“…Moreover, the transformer embeddings obtained the best results among the document embeddings experiments as they manage to encode and preserve the context within the vector representation. When compared to the state-of-the-art model MisRoBAERTa [23], the BiLSTM with BART obtained similar results, while the BiGRU with BART marginally outperformed the model with a 0.02% difference in accuracy. We hypothesize that this difference in performance is due to the use of pre-trained transformers instead of fine-tuned versions.…”
Section: Fake News Detection (mentioning; confidence: 88%)
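The pipeline this citation describes — pre-trained transformer document embeddings (e.g., from BART) fed into a bidirectional recurrent classifier such as a BiGRU — can be sketched minimally. The sketch below uses only NumPy to illustrate the data flow; the random vectors stand in for real BART embeddings, and all dimensions, names, and the omission of biases are illustrative assumptions, not the cited authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell update (biases omitted for brevity)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1.0 - z) * h + z * h_tilde

def run_gru(seq, params, hidden):
    """Run the GRU over a token sequence; return the final hidden state."""
    h = np.zeros(hidden)
    for x in seq:
        h = gru_step(x, h, *params)
    return h

emb_dim, hidden, n_classes, seq_len = 16, 8, 2, 10

def init(shape):
    return rng.normal(scale=0.1, size=shape)

# Input-to-hidden matrices at even indices, hidden-to-hidden at odd.
params = [init((emb_dim, hidden)) if i % 2 == 0 else init((hidden, hidden))
          for i in range(6)]

# Stand-in for one document's BART token embeddings.
doc = rng.normal(size=(seq_len, emb_dim))

# Bidirectional: run the GRU forward and backward, then concatenate.
h_fwd = run_gru(doc, params, hidden)
h_bwd = run_gru(doc[::-1], params, hidden)
features = np.concatenate([h_fwd, h_bwd])      # shape: (2 * hidden,)

# Linear classification head with softmax over {fake, real}.
W_out = init((2 * hidden, n_classes))
logits = features @ W_out
probs = np.exp(logits) / np.exp(logits).sum()
```

In practice the recurrent layer and classification head would be trained end-to-end (the citing work mentions Keras), while the transformer producing the embeddings is either frozen (pre-trained) or fine-tuned — the distinction the quoted statement hypothesizes accounts for the 0.02% accuracy gap.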
“…We employed Keras for implementing the neural models. For comparison, we used the free implementation of MisRoBAERTa [23], made available by the authors on GitHub.…”
Section: Methods (mentioning; confidence: 99%)