2023
DOI: 10.5210/spir.v2022i0.13100

Examining the Effectiveness of Artificial Intelligence-Based Cyberbullying Moderation on Online Platforms: Transparency Implications

Abstract: Cyberbullying remains a significant problem for children, one that appears to have been exacerbated by Covid-19 related lockdowns, which moved many of children's offline activities online. Transparency reports shared by social network and gaming platform companies indicate increased take-downs of offensive and harmful comments, posts, or content by artificially intelligent (AI) tools. Nonetheless, little is known about how such tools are designed and developed, what data they are trained on, and how this is done in…

Cited by 2 publications (2 citation statements)
References 0 publications

“…The COVID-19 pandemic and subsequent transition to online social activities have notably increased cyberbullying risks among children. Despite significant advancements in AI-driven content moderation on social media and gaming platforms, the lack of transparency concerning these AI tools' development, deployment, and the datasets used for training marks a critical need for more accessible AI solutions to effectively combat cyberbullying [31].…”
Section: Related Work (mentioning)
confidence: 99%
“…AI techniques leveraged by social media platforms to combat harmful online content remain somewhat opaque. However, insights from strides in computational research offer potential strategies utilised by such platforms [8]. Competitions like those by Semantic Evaluation (SemEval) [9]-[14] have advanced the development of systems capable of identifying various online harms.…”
Section: AI Algorithmic Opaqueness (mentioning)
confidence: 99%
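The citing work above points to SemEval-style systems for detecting online harms without describing how such systems are built. As an illustration only, and not the method of the cited paper or of any particular platform, the following minimal Python sketch shows the general shape of a supervised harmful-comment classifier; the training examples, labels, and decision threshold are invented for demonstration.

# Minimal, illustrative sketch of a supervised harmful-comment classifier.
# The training examples, labels, and threshold are hypothetical; they are
# not data or parameters from the cited works.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled comments: 1 = potentially harmful, 0 = benign (hypothetical data).
comments = [
    "you are worthless and nobody likes you",
    "great game last night, well played",
    "log off and never come back, loser",
    "thanks for the helpful answer",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score a new comment and flag it for moderator review above a chosen threshold.
new_comment = ["nobody likes you, just quit"]
prob_harmful = model.predict_proba(new_comment)[0][1]
if prob_harmful > 0.5:  # threshold chosen arbitrarily for illustration
    print(f"Flag for review (score={prob_harmful:.2f})")
else:
    print(f"No action (score={prob_harmful:.2f})")

Production systems differ substantially (larger datasets, neural models, human review loops), which is precisely the detail the cited paper argues platforms do not disclose.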