2016
DOI: 10.5120/ijca2016910408
Automatic Monitoring and Prevention of Cyberbullying

Abstract: The digital age has given rise to a new form of bullying, termed cyberbullying. A majority of teens use some form of social media, so cyberbullying has become rampant and, in extreme cases, has even driven victims to suicide. In this paper, we present the results of a system we designed for the automatic monitoring and prevention of cyberbullying. Its response grading system takes the severity of the bullying into account and issues appropriate responses.
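The response-grading idea described in the abstract (and in the citing works below, which report a mapping to high, medium, and low severity with a pop-up warning before a message is sent) can be sketched as follows. The thresholds, level names, and response texts here are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of severity-based response grading for a detected
# bullying message. The classifier score is assumed to be a probability
# in [0, 1]; thresholds and responses are illustrative, not the paper's.

def grade_severity(score: float) -> str:
    """Map a bullying-probability score to a severity level."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    if score > 0.2:
        return "low"
    return "none"

# One intervention per severity level; "none" means no action is taken.
RESPONSES = {
    "high": "Message blocked and flagged for moderator review.",
    "medium": "Warning shown: this message may be hurtful. Send anyway?",
    "low": "Gentle reminder about respectful communication.",
    "none": None,
}

def respond(score: float):
    """Return the intervention for a message's bullying score."""
    return RESPONSES[grade_severity(score)]
```

The graded design lets the system escalate from a soft nudge to outright blocking rather than treating all flagged messages alike.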

Cited by 22 publications (13 citation statements)
References 4 publications
“…The remaining 17 studies were ambiguous when providing details to annotators for labeling purposes or when determining whether annotators were experts in the domain of cyberbullying. From the five studies (Bayzick et al., 2011; Hosseinmardi et al., 2015; Ptaszynski et al., 2018; Sugandhi et al., 2016; Van Hee et al., 2015) that provided some instruction, the annotators were given definitions of cyberbullying and/or given context to the content they were labeling. Rosa et al. (2019) also found that annotators for cyberbullying datasets, when available, were frequently students or random individuals on MTurk without specific qualifications.…”
Section: Annotation of Existing Cyberbullying Datasets
Confidence: 99%
“…It gives details on cybercrime and the need for social media monitoring as a way of curbing cyber and related crime. Moreover, [10] created a response grading system that mapped bullying on Twitter to high, medium, and low severity. It pops up a warning message when it predicts that a bullying message is about to be sent.…”
Section: Related Work
Confidence: 99%
“…In most studies to date, annotators labeled individual messages instead of message threads, ignoring social context altogether (Al-garadi, Varathan, and Ravana 2016; Huang, Singh, and Atrey 2014; Nahar et al. 2014; Reynolds, Kontostathis, and Edwards 2011; Singh, Huang, and Atrey 2016; Sugandhi et al. 2016). Only three of the papers that we reviewed incorporated social context in the annotation process.…”
Section: Existing Sources of Cyberbullying Data
Confidence: 99%