Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021
DOI: 10.1145/3404835.3463029
Does BERT Pay Attention to Cyberbullying?

Abstract: Social media have brought threats like cyberbullying, which can lead to stress, anxiety, depression and, in some severe cases, suicide attempts. Detecting cyberbullying can help to warn/block bullies and provide support to victims. However, very few studies have used self-attention-based language models like BERT for cyberbullying detection, and they typically only report BERT's performance without examining in depth the reasons for it. In this work, we examine the use of BERT for cyberbullying det…
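The self-attention mechanism the abstract refers to can be illustrated with a minimal single-head, scaled dot-product computation on toy numbers. This is only a sketch of the generic mechanism BERT builds on, not the paper's model or experimental setup; all shapes and weights here are made up for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X:  (seq_len, d_model) token representations.
    W*: (d_model, d_k) learned projection matrices.
    Returns the attended outputs and the attention weights,
    i.e. how much each token 'pays attention' to every other token.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                # 4 toy tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

Inspecting the rows of `weights` is the kind of analysis the paper's title alludes to: each row is a distribution over the input tokens showing where the model attends for that position.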


Cited by 15 publications (7 citation statements). References 30 publications.
“…I also investigate the most important part of speech (POS) tags that BERT relies on for its performance. The results of this work suggest that pre-training BERT results in a syntactical bias that impacts its performance on the task of hate speech detection (Elsafoury et al, 2021b).…”
Section: The Explainability Perspective
confidence: 89%
“…The attention mechanism on which it is based makes it possible for it to handle long-term dependencies [16,17]. Hence, Transformer-based models have gained increased attention in HS detection and classification [12,15,18,19]. Table 2 compares some of the methods that have been employed for automatic HS detection in the literature.…”
Section: Related Work
confidence: 99%
“…Despite such systems demonstrating high accuracy in detecting harmful content, they are not near-perfect and often lack consideration for children's viewpoints in their design. Notably, studies focusing on explainable filtering of online bullying text content have made strides in computational research [15], [16], but fall short in incorporating children's perspectives to enhance the transparency of the AI filtering system's decision-making process.…”
Section: AI Algorithmic Opaqueness
confidence: 99%