2020
DOI: 10.48550/arxiv.2010.14952
Preprint
Towards Ethics by Design in Online Abusive Content Detection

Abstract: To support safety and inclusion in online communications, significant efforts in NLP research have been put towards addressing the problem of abusive content detection, commonly defined as a supervised classification task. The research effort has spread out across several closely related sub-areas, such as detection of hate speech, toxicity, cyberbullying, etc. There is a pressing need to consolidate the field under a common framework for task formulation, dataset design and performance evaluation. Further, de…

Cited by 1 publication (1 citation statement)
References 53 publications
“…Vidgen et al. (2019) extended this typology to three dimensions: (1) whether the abuse is directed towards an individual, an identity (based on belonging to a demographic category, social group, or organization), or a concept (such as a belief system, country, or ideology), (2) who receives the abuse (e.g., which identity group, moderators vs. content producers, friends vs. strangers), and (3) how abuse is articulated (e.g., aggression, insults, stereotyping; explicit vs. implicit). Kiritchenko and Nejadgholi (2020) suggested a two-dimensional multi-level classification structure that includes a hierarchical schema for the subject matter of an utterance (or target of abuse) and fine-grained severity of abuse explicitly annotated through comparative methods. Sap et al. (2020) framed offensive language detection as a hierarchical task that combines structured classification with reasoning about social implications.…”
Section: Moving Away From Coarse-grained Definitions of Abuse
confidence: 99%