2022
DOI: 10.1007/s13278-022-00993-7
Misogynoir: challenges in detecting intersectional hate

Abstract: “Misogynoir” is a term that refers to the anti-Black forms of misogyny that Black women experience. To explore how current automated hate speech detection approaches perform in detecting this type of hate, we evaluated the performance of two state-of-the-art detection tools, HateSonar and Google’s Perspective API, on a balanced dataset of 300 tweets, half of which are examples of misogynoir and half of which are examples of supporting Black women, and an imbalanced dataset of 3138 tweets, of which 162 tw…
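For context, the Perspective API mentioned in the abstract scores text for attributes such as toxicity through a REST endpoint. A minimal sketch of constructing such a request follows; it only builds the JSON payload (no network call, no API key), and the helper name is illustrative, while the `comment`/`requestedAttributes` field names follow Perspective's documented request format:

```python
import json

# Perspective's comments:analyze endpoint (shown for reference; not called here).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text):
    """Build the JSON body for a Perspective toxicity-scoring request."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_analyze_request("example tweet text")
body = json.dumps(payload)  # this JSON string would be POSTed to ANALYZE_URL
```

A study like the one above would send one such request per tweet and compare the returned toxicity score against the human label.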

Cited by 10 publications (4 citation statements); references 47 publications.
“…Building on algorithms that extract personality profiles from online activity ( 23 , 24 ), there are now AI tools that attempt to automatically score individual social media users on traits such as sexism or aggression ( 25–27 ), which can then be used to form a moral impression of these users. AI moral scoring also fuels large-scale social engineering projects such as the Chinese social credit system, in which a wide range of behaviors are aggregated into a single score for every citizen, with social and legal penalties for citizens who drop beneath a certain score.…”
Section: Introduction (mentioning; confidence: 99%)
“…The analysis of sentiment and detection of biases enabled the development of tools to counteract biased reporting, enhancing the objectivity of news dissemination [30,31,32]. LLMs' ability to process and interpret natural language allowed for the identification of subtle emotional undertones that could influence public perception [33,34,35]. The application of LLMs in sentiment analysis facilitated the assessment of the overall tone of news articles, providing insights into the potential biases present in the content [36,37].…”
Section: Sentiment Analysis and Bias Detection (mentioning; confidence: 99%)
“…Content moderation systems generally focus on defining policies to protect any identity group or individual targeted [13]. Nevertheless, the specific sociolinguistic aspects of harmful expressions [14,15] make this phenomenon different for each target. A system focused on recognising hate directed to a specific group would not generalise to a different identity [2].…”
Section: Related Work (mentioning; confidence: 99%)