2022
DOI: 10.1007/s42380-022-00117-x

Artificial Intelligence to Address Cyberbullying, Harassment and Abuse: New Directions in the Midst of Complexity

Abstract: This brief article serves as an introductory piece for the special issue “The Use of Artificial Intelligence (AI) to Address Online Bullying and Abuse.” It provides an overview of the state of the art in the use of AI to address various types of online abuse and cyberbullying, outlines current challenges for the field, and emphasises the need for greater interdisciplinary collaboration on this topic. The article also summarises key contributions of the articles selected for the special issue.

Cited by 10 publications (5 citation statements)
References 17 publications (9 reference statements)
“…Users can report cyberbullying to platforms first (reactive moderation), but AI is also increasingly used to crawl/screen content before it is reported to platforms in an effort of proactive moderation. This process is detailed in some of the large companies' Transparency Reports, which show the amount or percentage of bullying content that was detected and removed proactively (Milosevic, Van Royen, & Davis, 2022).…”
Section: Background on Cyberbullying Interventions and AI on Social M... (mentioning)
confidence: 99%
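To make the reactive/proactive distinction in the statement above concrete, here is a minimal Python sketch of the two moderation paths. The scoring function, threshold, and queue handling are illustrative assumptions, not any platform's documented pipeline.

```python
# A minimal sketch of reactive vs. proactive moderation, under assumed
# names: bullying_score stands in for a trained classifier, and the
# threshold is arbitrary. Not any platform's actual pipeline.

def bullying_score(text: str) -> float:
    """Stand-in for a trained classifier; returns a score in [0, 1]."""
    insults = {"loser", "idiot", "nobody likes you"}
    return 1.0 if any(term in text.lower() for term in insults) else 0.0

def screen_proactively(posts: list[str], threshold: float = 0.8) -> list[str]:
    """Proactive moderation: AI screens content before any user report."""
    return [p for p in posts if bullying_score(p) >= threshold]

def handle_report(post: str, review_queue: list[str]) -> None:
    """Reactive moderation: content enters review only after a user report."""
    review_queue.append(post)

if __name__ == "__main__":
    posts = ["great game today!", "you are such a loser"]
    flagged = screen_proactively(posts)  # proactive path
    queue: list[str] = []
    handle_report(posts[1], queue)       # reactive path
    print(flagged, queue)
```

The proactive counts surfaced in the Transparency Reports mentioned above would correspond to the share of removals that originate from the screening path rather than the report path.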
“…Previous research, however, has not solicited children’s views as to the desirability, perceived effectiveness, and privacy impacts of such interventions, which is what we set out to do in our research. Moreover, we also solicit children’s views on proactive content monitoring or screening in direct messages versus on publicly shared content; and the use of facial recognition for cyberbullying detection, all of which are technically feasible on social media platforms even when there is little clarity from the platforms themselves as to whether and how specifically they are implemented (Gorwa et al, 2020; Milosevic, Van Royen, & Davis, 2022; Verma et al, 2022).…”
Section: Introduction (mentioning)
confidence: 99%
“…Concerning available resources, most of the proposed studies rely on "easy to access" data (Facebook, YouTube or Instagram) annotated using binary schemes (e.g. content or behaviours categorised as abusive or not abusive) (Milosevic, Van Royen, and Davis, 2022). Whilst Van Hee et al. (2015) introduce fine-grained annotation guidelines to improve cyberbullying detection, few datasets were released using a more nuanced description of the type of abuse involved, the severity of the case, or the roles played by those involved.…”
Section: Related Work (mentioning)
confidence: 99%
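The contrast drawn above between binary schemes and more nuanced annotation can be shown in a short sketch. The label fields below (abuse type, severity, author role) and their value sets are assumptions for illustration; they are not the Van Hee et al. (2015) guidelines themselves.

```python
# A minimal sketch contrasting a binary annotation scheme with a
# fine-grained one. Field names and label sets are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    BULLY = "bully"
    VICTIM = "victim"
    BYSTANDER = "bystander"

@dataclass
class BinaryLabel:
    abusive: bool  # the coarse scheme most released datasets use

@dataclass
class FineGrainedLabel:
    abusive: bool
    abuse_type: str    # e.g. "insult", "threat", "exclusion"
    severity: int      # e.g. 0 (none) to 3 (severe)
    author_role: Role  # role played by the message author

# The same message under both schemes:
msg = "nobody wants you here"
coarse = BinaryLabel(abusive=True)
fine = FineGrainedLabel(abusive=True, abuse_type="exclusion",
                        severity=2, author_role=Role.BULLY)
```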
“…However, the efficacy of these provisions is limited. Social media platforms cannot moderate all toxic comments (Mohammad et al., 2016; Cho, 2017; Badjatiya et al., 2019; Sap et al., 2019; Milosevic et al., 2022). Linguistic filtering approaches can accidentally result in the biased treatment of discriminated minorities (Badjatiya et al., 2019; Sap et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…Linguistic filtering approaches can accidentally result in the biased treatment of discriminated minorities (Badjatiya et al., 2019; Sap et al., 2019). Removing toxic comments according to the terms of service only removes explicit expressions; therefore, ambiguous and/or cloaked blatant expressions tend to remain on platforms (Mohammad et al., 2016; Cho, 2017; Milosevic et al., 2022). A study revealed that banning the accounts of offenders would not alter their thinking (Johnson et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%
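A short sketch illustrates the point above that term-based removal catches only explicit expressions: a literal blocklist misses cloaked variants unless the text is normalised first. The blocklist and character-substitution map below are illustrative assumptions.

```python
# A minimal sketch of why literal term filtering misses "cloaked"
# expressions. Blocklist and substitution map are illustrative assumptions.

BLOCKLIST = {"idiot", "loser"}

# Common character substitutions used to evade filters (leetspeak).
LEET_MAP = str.maketrans({"1": "i", "0": "o", "3": "e", "@": "a", "$": "s"})

def explicit_filter(text: str) -> bool:
    """Literal matching: catches only explicit expressions."""
    return any(term in text.lower() for term in BLOCKLIST)

def normalised_filter(text: str) -> bool:
    """Normalise character substitutions before matching."""
    return explicit_filter(text.lower().translate(LEET_MAP))

print(explicit_filter("what an id1ot"))    # False: cloaked variant slips past
print(normalised_filter("what an id1ot")) # True: caught after normalisation
```

Even with normalisation, genuinely ambiguous expressions remain hard to catch, which is consistent with the limitation the quoted passage describes.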