2019
DOI: 10.1007/978-3-030-35166-3_41
A New Measure of Polarization in the Annotation of Hate Speech

Cited by 19 publications (26 citation statements); references 19 publications.
“…Kenyon-Dean and colleagues found that over 30% of the instances in the corpus were "controversial" or "complicated" cases about which annotators disagreed. Akhtar et al (2019) experimented with partitioning the annotators in hate speech datasets into clusters reflecting more uniform subjective judgments in order to achieve increased inter-annotator agreement.…”
Section: Sentiment Analysis and Other Subjective Tasks
confidence: 99%
“…Given the rate at which user-generated content is produced every minute, manually monitoring abusive behavior in social media is impractical. Facebook and Twitter have also made efforts to eliminate abusive content from their platforms by providing clear policies on hateful conduct, implementing user report mechanisms, and employing content moderators to filter abusive postings. However, these efforts are not a scalable, long-term solution to this problem.…”
Section: Introduction
confidence: 99%
“…Disagreement in annotation has been studied from a particular angle when occurring in highly subjective tasks such as offensive and abusive language detection or hate speech detection. Akhtar et al (2019) introduced the polarization index, aiming at measuring a particular form of disagreement stemming from clusters of annotators whose opinions on the subjective phenomenon are polarized, e.g., because of different cultural backgrounds. Specifically, polarization measures the ratio between intra-group and inter-group agreement at the individual instance level, capturing the cases where different groups of annotators strongly agree on different labels.…”
Section: Disagreement On 'Subjective' Tasks
confidence: 99%
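The quote above describes the polarization index only at the level of intuition: intra-group agreement divided by inter-group agreement, computed per instance. The exact formula from Akhtar et al. (2019) is not given here, so the following is a minimal sketch of that ratio idea using simple pairwise label agreement; the function names and the two-group restriction are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations, product

def pairwise_agreement(labels):
    """Fraction of annotator pairs within one group assigning the same label."""
    pairs = list(combinations(labels, 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

def cross_agreement(group_a, group_b):
    """Fraction of cross-group annotator pairs assigning the same label."""
    pairs = list(product(group_a, group_b))
    return sum(a == b for a, b in pairs) / len(pairs)

def polarization(group_a, group_b):
    """Ratio of mean intra-group to inter-group agreement for one instance.

    High values flag the cases the quote describes: each annotator
    cluster agrees internally, but the clusters disagree with each other.
    """
    intra = (pairwise_agreement(group_a) + pairwise_agreement(group_b)) / 2
    inter = cross_agreement(group_a, group_b)
    return intra / inter if inter > 0 else float("inf")

# Two hypothetical annotator clusters labeling one tweet (1 = hate, 0 = not hate)
print(polarization([1, 1, 1], [0, 0, 0]))  # fully polarized instance -> inf
print(polarization([1, 0, 1], [0, 1, 0]))  # mixed labels in both groups
```

Under this sketch, an instance where both groups are internally unanimous but mutually opposed gets an unbounded score, while instances with disagreement spread evenly across groups score near or below 1.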
“…Figure 1 shows two examples from CV and NLP. This is particularly true for tasks involving highly subjective judgments, such as hate speech detection (Akhtar et al., 2019, 2020) or sentiment analysis (Kenyon-Dean et al., 2018). However, it is not a trivial issue even in more linguistic tasks, such as part-of-speech tagging (Plank et al., 2014), word sense disambiguation (Passonneau et al., 2012; Jurgens, 2013), or coreference resolution (Poesio and Artstein, 2005; Recasens et al., 2011).…”
Section: Introduction
confidence: 99%