2013
DOI: 10.1007/978-3-642-36973-5_62
Improving Cyberbullying Detection with User Context

Abstract: The negative consequences of cyberbullying are becoming more alarming every day, and technical solutions that allow for taking appropriate action by means of automated detection are still very limited. Until now, studies on cyberbullying detection have focused on individual comments only, disregarding context such as users' characteristics and profile information. In this paper we show that taking user context into account improves the detection of cyberbullying.
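As a minimal sketch of what "taking user context into account" might look like in practice, the snippet below combines per-comment features with user-level features. The feature names, toy profanity lexicon, and profile fields are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative only: combining comment-level features with user context,
# in the spirit of the abstract. Lexicon and profile fields are assumed.

PROFANITY = {"idiot", "loser"}  # toy lexicon for illustration


def comment_features(text):
    """Features derived from the comment text alone."""
    tokens = text.lower().split()
    return {
        "profane_count": sum(t in PROFANITY for t in tokens),
        "length": len(tokens),
    }


def user_context_features(profile):
    """Features derived from the commenting user's profile/history,
    i.e. the kind of 'user characteristics' the abstract refers to."""
    return {
        "account_age_days": profile.get("age_days", 0),
        "prior_flags": profile.get("prior_flags", 0),
    }


def combined_features(text, profile):
    """Feature vector a classifier would consume: text + user context."""
    feats = comment_features(text)
    feats.update(user_context_features(profile))
    return feats
```

In a real system these dictionaries would be vectorized and fed to a classifier; the point is only that the feature vector carries user-level signals alongside the comment text.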

Cited by 318 publications (274 citation statements)
References 2 publications
“…Obscenity and profanity filtering can be seen as a part of developing content filters (such as parental controls (Weir and Duta, 2012)), cyberbullying detectors (Dadvar et al., 2013) and spam filters (Yoon et al., 2010). Another application of obscenity filtering is found in sentiment analysis, where obscene words are treated as indicators of negative (Ji et al., 2013) or sarcastic reviews (Bamman and Smith, 2015).…”
Section: Related Work (mentioning, confidence: 99%)
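A toy illustration of the blacklist-style obscenity filtering that the citation above describes as a component of content filters and parental controls; the word list, regex approach, and function names are assumptions for illustration, not drawn from any of the cited systems:

```python
import re

# Assumed toy blacklist; real content filters use far larger curated lists.
BLACKLIST = ["badword", "curse"]
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, BLACKLIST)) + r")\b",
    re.IGNORECASE,
)


def contains_obscenity(text):
    """Flag text that matches any blacklisted word."""
    return bool(PATTERN.search(text))


def censor(text):
    """Replace each blacklisted word with asterisks of equal length."""
    return PATTERN.sub(lambda m: "*" * len(m.group()), text)
```

Word-boundary anchors (`\b`) keep the filter from firing inside longer innocent words, one of the classic failure modes of naive substring blacklists.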
“…Prior work in abusive language has been rather diffuse as researchers have focused on different aspects ranging from profanity detection to hate speech detection (Warner and Hirschberg, 2012) to cyberbullying (Dadvar et al., 2013) and to abusive language in general (Chen et al., 2012; Djuric et al., 2015b).…”
Section: Related Work (mentioning, confidence: 99%)
“…Token n-grams are one of the most popular features across many works (Yin et al., 2009; Chen et al., 2012; Warner and Hirschberg, 2012; Xiang et al., 2012; Dadvar et al., 2013). Hand-crafted regular expressions and blacklists also feature prominently (Yin et al., 2009; Xiang et al., 2012).…”
Section: Related Work (mentioning, confidence: 99%)
“…Many social studies have been conducted to provide support and training for adults and teenagers [2,3]. The majority of existing technical studies on cyberbullying have concentrated on detecting bullying or harassing comments [4][5][6], while there is hardly any work on the more challenging task of detecting cyberbullies, and studies in this area of research are largely missing. There are a few exceptions, however, that point out an interesting direction for incorporating user information in detecting offensive content, but more advanced user information or personal characteristics, such as writing style or network activities, have not been included in these studies [7,8].…”
Section: Introduction (mentioning, confidence: 99%)