2018
DOI: 10.1007/978-3-319-91947-8_6
Automatic Identification and Classification of Misogynistic Language on Twitter

Cited by 120 publications (112 citation statements)
References 14 publications
“…While the problem of online hate speech has been the focus of a wide body of research during the last few years [15], computational approaches targeting the problem of misogyny in particular are scarce and very recent. Computational methods have been either used to observe and study the phenomenon of online misogyny [6,[20][21][22], to generate automatic misogynistic content detection methods [4,12,13], or to use the appearance of misogyny related words in online content as a predictor of criminal behaviour [16].…”
Section: Computational Approaches
confidence: 99%
“…One step further from observational studies, Maria Anzovino and colleagues [4] focus on the automatic detection and categorisation of misogynous language in social media. They design a taxonomy of manifestations of misogyny that includes five different categories: discredit, stereotype and objectification, sexual harassment and threats of violence, dominance, and derailing.…”
Section: Computational Approaches
confidence: 99%
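The five-category taxonomy quoted above can be encoded as a simple label set. This is an illustrative sketch only; the class and member names are my own shorthand for the categories listed in the citation statement, not identifiers from Anzovino et al.'s paper.

```python
from enum import Enum

class MisogynyCategory(Enum):
    # The five manifestations of misogyny in the taxonomy quoted above;
    # values mirror the category names, member names are assumed shorthand.
    DISCREDIT = "discredit"
    STEREOTYPE_OBJECTIFICATION = "stereotype and objectification"
    HARASSMENT_THREATS = "sexual harassment and threats of violence"
    DOMINANCE = "dominance"
    DERAILING = "derailing"
```

An annotation pipeline built on this taxonomy would assign each misogynous post exactly one of these labels, since the categories are treated as mutually exclusive.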
“…We experiment with Support Vector Machine (SVM), Random Forests (RF), and Logistic Regression (LR). The features explored include TF-IDF on character n-grams (1-5 characters), TF-IDF on word unigrams and bigrams, the mean of the ELMo vectors for the words in a post, and the composite set of features similar to (Anzovino et al, 2018) comprising n-gram based, POS-based, and doc2vec (Le and Mikolov, 2014) features, the post length, and the adjective count. LSTM-based Architectures. biLSTM: The word embeddings for all words in a post are fed to a bidirectional LSTM.…”
Section: Baselines
confidence: 99%
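The TF-IDF baselines described in that citation statement (character n-grams of 1-5 characters combined with word unigrams and bigrams, fed to a linear classifier) can be sketched with scikit-learn. This is a minimal illustration on toy data; the corpus, labels, and hyperparameters of the cited work are not reproduced here, and the ELMo, doc2vec, and POS features are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

# Toy stand-in data; the cited work uses an annotated Twitter corpus.
posts = [
    "you are great",
    "awful hateful insult post",
    "nice day today",
    "insult insult awful",
]
labels = [0, 1, 0, 1]  # 0 = not misogynistic, 1 = misogynistic

features = FeatureUnion([
    # TF-IDF over character n-grams (1-5 characters)
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(1, 5))),
    # TF-IDF over word unigrams and bigrams
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
])

# Logistic Regression (LR) is one of the three classifiers the quote mentions.
clf = Pipeline([("features", features), ("lr", LogisticRegression())])
clf.fit(posts, labels)
preds = clf.predict(posts)
```

Swapping `LogisticRegression` for `sklearn.svm.SVC` or `sklearn.ensemble.RandomForestClassifier` gives the SVM and RF baselines over the same feature union.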
“…While sexism is detected as a category of hate in some of the hate speech classification work (Badjatiya et al, 2017;Waseem and Hovy, 2016), that work does not perform sexism classification. Except for the work on categorizing sexual harassment by Karlekar and Bansal (2018), the prior work on classifying sexism assumes the categories to be mutually exclusive (Anzovino et al, 2018;Jha and Mamidi, 2017). Moreover, the existing category sets number between 2 and 5.…”
Section: Introduction
confidence: 99%