2014
DOI: 10.1007/978-3-319-13647-9_21
Aggressive Text Detection for Cyberbullying

Cited by 31 publications (28 citation statements) | References 7 publications
“…Over the last decade, the body of literature on automated detection of cyberbullying has been growing, especially on the topic of detecting cyberbullying from social media networks like Twitter [7]-[13], Instagram [10], [11], [14]-[16] and YouTube [17]-[19]. This body of research has been working towards automated cyberbullying detection using either rule-based models [12], [19], [20], conventional machine learning models [16], [18], [19], [21], or deep learning models [13], [21]-[23].…”
Section: Introduction
Mentioning confidence: 99%
“…To a lesser extent, researchers have analyzed the content of tweets such as sentiment (Choi et al., 2014; Resnik, Bellmore, Xu, & Zhu, 2016), hashtags (Calvin et al., 2015), and the role of the audience or online bystanders (Cocea, 2016). Previous Twitter research suggests that each approach to text analysis includes both strengths and weaknesses; for example, machine learning faces challenges when coding slang and “informal text” (Del Bosque & Garza, 2014), while human analysis might include personal bias due to individual experience. Recognizing this, the current study included both Linguistic Inquiry and Word Count (LIWC) software-generated analysis as well as human coding and analysis.…”
Mentioning confidence: 99%
“…The detection of aggressive text messages was performed using the approach proposed by Del Bosque and Garza [9], which is an unsupervised, lexicon-based, term-counting strategy that identifies profane words. In summary, given a set M of messages, this approach assigns a score sc_i to each message m_i.…”
Section: Aggressive Text Message Detection
Mentioning confidence: 99%
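The citation above describes the scoring step only at a high level. Below is a minimal sketch of a lexicon-based, term-counting scorer consistent with that description; the placeholder lexicon, the tokenizer, and the 0-10 scaling are illustrative assumptions and do not reproduce the exact formula of Del Bosque and Garza [9].

```python
# Minimal sketch of a lexicon-based, term-counting aggressiveness scorer.
# The lexicon, tokenizer, and 0-10 scaling are illustrative assumptions,
# not the exact scoring function of Del Bosque and Garza [9].
import re

PROFANITY_LEXICON = {"idiot", "stupid", "loser"}  # placeholder lexicon


def aggressiveness_score(message: str, scale_max: float = 10.0) -> float:
    """Assign a score sc_i to a message m_i from its fraction of profane terms."""
    tokens = re.findall(r"[a-z']+", message.lower())
    if not tokens:
        return 0.0
    profane = sum(1 for t in tokens if t in PROFANITY_LEXICON)
    return scale_max * profane / len(tokens)


def score_messages(messages):
    """Score every message m_i in the set M of messages."""
    return [aggressiveness_score(m) for m in messages]
```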
“…In that sense, a user that sends messages with a particular frequency and aggressiveness score is considered an alleged aggressor or bully, while a user that receives messages with a particular frequency and aggressiveness score is considered an alleged victim. In this case, the frequency is two or more messages within M (counting repetitions) and the aggressiveness score is sc_i ≥ 5 (the midpoint of the scale used by Del Bosque and Garza [9]).…”
Section: Alleged Aggressor and Victim Detection
Mentioning confidence: 99%
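As a companion to the rule quoted above, here is a minimal sketch of the thresholding step, assuming each scored message is represented as a (sender, receiver, score) triple; that representation and the function name are hypothetical choices made only for illustration.

```python
# Minimal sketch of the alleged-aggressor / alleged-victim rule quoted above.
# The (sender, receiver, score) message representation is a hypothetical
# assumption for illustration purposes.
from collections import Counter

AGGRESSIVENESS_THRESHOLD = 5.0  # sc_i >= 5, midpoint of the 0-10 scale in [9]
MIN_MESSAGES = 2                # "two or more messages within M"


def detect_roles(messages):
    """Return (alleged_aggressors, alleged_victims) from scored messages."""
    sent, received = Counter(), Counter()
    for sender, receiver, score in messages:
        if score >= AGGRESSIVENESS_THRESHOLD:
            sent[sender] += 1
            received[receiver] += 1
    aggressors = {u for u, n in sent.items() if n >= MIN_MESSAGES}
    victims = {u for u, n in received.items() if n >= MIN_MESSAGES}
    return aggressors, victims
```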