2022
DOI: 10.1016/j.ipm.2021.102760
Ceasing hate with MoH: Hate Speech Detection in Hindi–English code-switched language

Cited by 49 publications (15 citation statements)
References 31 publications
“…In the year 2022, Arushi S., et al [21] proposed a study that focuses on identifying hate speech in Hindi-English Code-Switched languages. The authors' research entails experimenting with transformation strategies to obtain an accurate text representation.…”
Section: B Work Done On the Hate Speech Classification In Code…
confidence: 99%
“…Changing the order of the words in a sentence might change the meaning of the sentence; therefore, text augmentation is slightly different from other augmentation techniques. As the Twitter data [19,22] that we have considered from Kaggle was not sufficient, we have increased the dataset size using the nlpaug tool. This nlpaug [23] method uses word-embedding techniques and various augmenter strategies, such as insertion and substitution, to augment the data at the character level, word level and sentence level.…”
Section: Dataset Preparation and Preprocessing
confidence: 99%
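The augmentation strategies the excerpt describes (insertion and substitution at character and word level) can be sketched in a few lines of plain Python. This is a hand-rolled toy illustration of the idea; the cited work uses the nlpaug library, and the example text and filler-word list below are invented:

```python
import random

random.seed(42)

def char_substitute(text, rate=0.1):
    """Character-level substitution: randomly replace letters with noise."""
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and random.random() < rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def word_insert(text, fillers=("really", "very", "so"), rate=0.2):
    """Word-level insertion: randomly insert filler words before tokens."""
    out = []
    for w in text.split():
        if random.random() < rate:
            out.append(random.choice(fillers))
        out.append(w)
    return " ".join(out)

# Each augmented variant is a new training example derived from the original.
original = "this movie is bad"
augmented = [char_substitute(original), word_insert(original)]
```

With `rate=0.0` both functions return the input unchanged, so the noise level is tunable per augmenter, mirroring how augmentation strength is usually controlled.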
“…The BERT model is an unsupervised deep bidirectional neural network that implements bidirectional transformer architecture. A BERT-based transfer learning approach has started to be used frequently in hate classification studies, as it leads to increased classification performance and reduced training time [78]. The transfer learning approach also provides effective learning from limited labeled data with a pretrained model.…”
Section: M-BERT Model
confidence: 99%
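The transfer-learning idea in this excerpt — keep a pretrained encoder frozen and train only a small classifier head on limited labeled data — can be illustrated with a dependency-free toy sketch. Everything here is invented for illustration (the fake "frozen encoder", the word lists, and the four-example dataset); the cited work fine-tunes an actual M-BERT model:

```python
import math
import random

random.seed(0)

def frozen_features(text):
    """Stand-in for a frozen pretrained encoder: maps text to 2 features."""
    words = text.lower().split()
    return [
        sum(w in {"hate", "stupid", "ugly"} for w in words) / max(len(words), 1),
        sum(w in {"love", "great", "nice"} for w in words) / max(len(words), 1),
    ]

# Tiny labeled set (1 = hateful, 0 = not), mimicking "limited labeled data".
data = [
    ("you are stupid and ugly", 1),
    ("i hate this stupid thing", 1),
    ("what a great and nice day", 0),
    ("love this so much", 0),
]

# Trainable head: logistic regression on the frozen features. Only these
# three parameters are updated; the "encoder" above never changes.
w = [0.0, 0.0]
b = 0.0
lr = 1.0
for _ in range(200):
    for text, y in data:
        x = frozen_features(text)
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of the log-loss w.r.t. z
        w = [w[i] - lr * g * x[i] for i in range(2)]
        b -= lr * g

def predict(text):
    x = frozen_features(text)
    z = w[0] * x[0] + w[1] * x[1] + b
    return int(1.0 / (1.0 + math.exp(-z)) > 0.5)
```

Because only the head is trained, very few labeled examples suffice — which is the point the excerpt makes about pretrained models reducing both data needs and training time.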