2021
DOI: 10.1609/icwsm.v5i3.14209
Modeling the Detection of Textual Cyberbullying

Abstract: The scourge of cyberbullying has assumed alarming proportions with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. Comments or posts involving sensitive topics that are personal to an individual are more likely to be internalized by a victim, often resulting in tragic outcomes. We decompose the overall detection prob…

Cited by 173 publications (54 citation statements); references 11 publications.
“…It is becoming increasingly clear that these platforms open space for contents and conversations which often become inappropriate and hurtful towards the online communities that these systems are supporting. This negative phenomenon is further supported by a recent statement by the Facebook VP of product management saying that the Facebook community "submit tens of millions of reports a week about potentially objectionable contents" 1 ; and it is becoming more urgent since children and adolescents easily access to social media but they are more susceptible to these contents and more exposed to threats such as cyberbullying (Dinakar, Reichart, and Lieberman 2011), online harassment or sexting (O'Keeffe, Clarke-Pearson, and others 2011).…”
Section: Introduction (mentioning)
confidence: 95%
“…Given the large-scale datasets required for robust analysis, this task can prove to be labor-intensive and daunting. Consequently, many researchers resort to hiring a group of annotators (Schenk, Guittard et al 2009), or use crowdsourcing tactics (Hosseinmardi et al 2015;Rafiq et al 2015) to manually code the data based on a pre-defined codebook that attempts to describe the phenomena of interest (Dinakar, Reichart, and Lieberman 2011;Dadvar et al 2013;Singh, Ghosh, and Jose 2017). This general practice may be appropriate where the classification task is straightforward, such as identifying specific objects in images.…”
Section: The Use of Third-Party Annotations for Establishing Ground T... (mentioning)
confidence: 99%
“…Yet, most published cyberbullying detection systems continue to utilize the practice of using third-party ("outsider") annotators in their research. For instance, Dinakar et al investigated the use of classification to detect cyberbullying comments on YouTube (Dinakar, Reichart, and Lieberman 2011). They had two annotators, one of which was a youth educator, label data related to sexuality, race, and intelligence.…”
Section: The Use of Third-Party Annotations for Establishing Ground T... (mentioning)
confidence: 99%
“…In the field of computational linguistics, related work for the detection of textual cyberbullying has involved the use of statistical supervised machine learning topic classification to detect sensitive topics such as sexuality, race and culture, socio-normative conflicts, physical appearance and intelligence [5]. The Genesis project has investigated the understanding of automatic plot…”
Section: Related Work (mentioning)
confidence: 99%
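
The supervised topic-classification approach summarized in the excerpt above can be illustrated with a short sketch. The following Python example is not the paper's original implementation: the example comments, topic names, labels, and the choice of TF-IDF features with a linear SVM (one binary classifier per sensitive topic) are assumptions made for demonstration only.

# A minimal sketch of per-topic supervised text classification for detecting
# sensitive topics, in the spirit of the approach summarized above. This is
# NOT the paper's original implementation; the example comments, topic names,
# and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical hand-labeled training data: comments plus a binary label
# per sensitive topic (1 = comment touches this topic, 0 = it does not).
comments = [
    "you throw like a girl, go back to the kitchen",
    "nice video, thanks for sharing!",
    "only an idiot would believe this",
    "great tutorial, very helpful",
]
labels_by_topic = {
    "sexuality":    [1, 0, 0, 0],
    "intelligence": [0, 0, 1, 0],
}

# Train one independent binary classifier per sensitive topic
# (unigram/bigram TF-IDF features feeding a linear SVM).
models = {}
for topic, labels in labels_by_topic.items():
    pipeline = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),
        LinearSVC(),
    )
    pipeline.fit(comments, labels)
    models[topic] = pipeline

# A new comment is flagged if any topic-specific classifier fires.
new_comment = ["you are so stupid it hurts"]
flags = {topic: int(model.predict(new_comment)[0])
         for topic, model in models.items()}
print(flags)  # e.g. {'sexuality': 0, 'intelligence': 1} (toy data, results will vary)

Training one classifier per topic, rather than a single multi-class model, keeps each decision boundary specific to its sensitive theme; in practice the toy data above would be replaced by a substantially larger annotated corpus.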