2022
DOI: 10.1016/j.csl.2022.101386
Hate speech and offensive language detection in Dravidian languages using deep ensemble framework

Cited by 44 publications (10 citation statements) | References 23 publications
“…Subsequently, Facebook has been another substantial source (Bilal et al, 2022; MacAvaney et al, 2019; Mozafari et al, 2022; Rodriguez et al, 2022; Sreelakshmi et al, 2020). Additionally, alternative platforms such as YouTube comments have been utilized (Kumar Roy et al, 2022; Roy et al, 2022; Sajid et al, 2020), along with resources like Wikipedia (Beddiar et al, 2021) and diverse online platforms (Alatawi et al, 2021; Beddiar et al, 2021). These data sources have been meticulously annotated to construct datasets for the explicit purpose of generating models for the categorization of hate speech.…”
Section: Discussion
confidence: 99%
“…Additionally, subtle forms of toxic content, like sarcasm or memes that target specific groups, can be particularly challenging to detect. Therefore, recent advances in applying transformer-based models to identify toxicity show how specific feature combination strategies [54] and ensemble models [55] achieve promising results. Finally, researchers evaluated the ability of Generative Pretrained Transformers (GPTs) to create synthetic datasets which can serve as input for deep learning architectures [56].…”
Section: ML For Toxicity Identification
confidence: 99%
“…Computer science highlighted that the difficulties in defining OHS affect the process of detecting this content, where deep learning, machine learning (Bhawal et al, 2021; Roy et al, 2020), and annotators play an important role. The use of code-mixed language (Roy et al, 2022) and undesired biases (Velankar et al, 2022) make OHS detection challenging. By focusing on context-dependent factors, these contributions failed to consider the moral judgement process affecting human work when annotators manually detect and label OHS (Velankar et al, 2022).…”
Section: Theoretical Framework
confidence: 99%