2022
DOI: 10.1007/978-3-031-15931-2_16
Analysis of COVID-19 5G Conspiracy Theory Tweets Using SentenceBERT Embedding

Cited by 8 publications (3 citation statements) · References 15 publications
“…One of the most popular models (and architectures) employed is BERT, first published in 2018, which has achieved the state of the art for a range of NLP applications, especially classification-oriented tasks. BERT-like pre-trained language models are typically used in recent research to build text classifiers for various text classification tasks, including hate speech (Basile et al 2019, Aluru et al 2020, Mathew et al 2022), offensive language (Wiegand, Siegel and Ruppenhofer 2018, Zampieri et al 2019, Mandl et al 2021) or (pre-specified) conspiracy theories (Pogorelov et al 2020, Moffitt, King and Carley 2021, Elroy and Yosipof 2022, Phillips, Ng and Carley 2022). The majority of these benchmark datasets are in the English language and were drawn primarily from Twitter (cf.…”
Section: Training of Custom Models to Detect Antisemitic Comments
confidence: 99%
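The pipeline the citing work describes — embed each text with a pre-trained model, then classify in embedding space — can be sketched minimally. This is an illustration only: the 3-dimensional vectors below stand in for real SentenceBERT embeddings, and the nearest-centroid rule is an assumed stand-in classifier, not the method of the cited paper.

```python
# Toy sketch of embedding-based text classification. In practice the
# embeddings would come from a pre-trained model (e.g. SentenceBERT);
# here they are hand-made 3-d vectors for illustration.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_label(embedding, centroids):
    """Return the label whose class centroid is closest (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(embedding, centroids[label]))

# Assumed toy "embeddings" standing in for SentenceBERT outputs.
train = {
    "conspiracy": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "other":      [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]],
}
centroids = {label: centroid(vecs) for label, vecs in train.items()}

print(nearest_centroid_label([0.85, 0.15, 0.05], centroids))  # → conspiracy
```

Real systems replace the centroid rule with a trained classifier head or fine-tuned transformer, but the geometry — classes separated in a fixed embedding space — is the same.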
“…However, with the passage of time, even the most publicised and newsworthy events fade from public memory or even become a subject of misinformation (Elroy, Erokhin, Komendantova, & Yosipof, 2023; Elroy & Yosipof, 2022; Erokhin, Yosipof, & Komendantova, 2022). Features of human cognition lead to reliance on shortcuts and to cognitive biases, which may create a subjective reality about an event, its causes and its consequences.…”
Section: Introduction
confidence: 99%