Proceedings of the Web Conference 2021
DOI: 10.1145/3442381.3449861

The Structure of Toxic Conversations on Twitter

Abstract: Social media platforms promise to enable rich and vibrant conversations online; however, their potential is often hindered by antisocial behaviors. In this paper, we study the relationship between structure and toxicity in conversations on Twitter. We collect 1.18M conversations (58.5M tweets, 4.4M users) prompted by tweets that are posted by or mention major news outlets over one year and candidates who ran in the 2018 US midterm elections over four months. We analyze the conversations at the individual, dyad…
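
As a rough illustration of what analyzing conversation structure can involve, the sketch below builds a reply tree from a set of tweets and computes two simple structural measures (depth of the longest reply chain and maximum branching). The Tweet fields and these particular metrics are assumptions for illustration only, not the paper's actual individual-, dyad-, or group-level measures.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tweet:
    tweet_id: str
    user_id: str
    reply_to: Optional[str]  # id of the tweet being replied to; None for the root

def build_reply_tree(tweets):
    """Map each tweet id to the ids of its direct replies."""
    children = defaultdict(list)
    for t in tweets:
        if t.reply_to is not None:
            children[t.reply_to].append(t.tweet_id)
    return children

def depth(children, node):
    """Length of the longest reply chain below `node`."""
    kids = children.get(node, [])
    if not kids:
        return 0
    return 1 + max(depth(children, k) for k in kids)

def max_branching(children):
    """Largest number of direct replies to any single tweet."""
    return max((len(kids) for kids in children.values()), default=0)

# Toy conversation: one root tweet, two replies, one nested reply.
tweets = [
    Tweet("1", "news_outlet", None),
    Tweet("2", "alice", "1"),
    Tweet("3", "bob", "1"),
    Tweet("4", "carol", "2"),
]
tree = build_reply_tree(tweets)
print(depth(tree, "1"))     # 2: chain 1 -> 2 -> 4
print(max_branching(tree))  # 2: tweet 1 has two direct replies
```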

Cited by 42 publications (29 citation statements)
References 53 publications

“…User History. Patterns in user behaviour, including daily logins (Balci and Salah, 2015), favourites (Unsvåg and Gambäck, 2018), and posting history (Saveski et al., 2021; Ziems et al., 2020), can be used as features in abusive language detection models. Some work focuses directly on the content of past comments.…”
Section: Methods in Abusive Language Detection
Citation type: mentioning (confidence: 99%)
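
A minimal sketch of the idea in this statement: summarizing a user's posting history into numeric features that a downstream abusive-language classifier could consume. The Post fields, the is_toxic labels, and the specific features are assumptions for illustration, not the feature sets of the cited papers.

```python
from dataclasses import dataclass

@dataclass
class Post:
    user_id: str
    text: str
    is_toxic: bool  # assumed label, e.g. produced by an earlier toxicity classifier

def user_history_features(posts, user_id):
    """Summarize one user's posting history as numeric classifier features."""
    history = [p for p in posts if p.user_id == user_id]
    n = len(history)
    if n == 0:
        return {"n_posts": 0.0, "toxic_fraction": 0.0, "avg_length": 0.0}
    return {
        "n_posts": float(n),
        "toxic_fraction": sum(p.is_toxic for p in history) / n,
        "avg_length": sum(len(p.text) for p in history) / n,
    }
```

Features like these would typically be concatenated with text features of the comment being classified, which is the pattern the next statement generalizes.
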
“…One way to ensure that machine learning frameworks are socially conscientious is to add context around conversations. Past research has explored the contextual information within conversation threads (Pavlopoulos et al., 2020; Ziems et al., 2020), user demographics (Unsvåg and Gambäck, 2018; Founta et al., 2019), user history (Saveski et al., 2021; Qian et al., 2018; Dadvar et al., 2013), user profiles (Unsvåg and Gambäck, 2018; Founta et al., 2019), and user networks (Ziems et al., 2020; Mishra et al., 2018) with varying degrees of success in improving performance. However, most modelling efforts for abusive language detection neglect one major aspect of online conversations: the community environment they take place within.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
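
To make the "add context" idea concrete, the sketch below concatenates features from the comment itself, its parent in the thread, and the author's history into a single input vector before classification. The three feature sources and the plain concatenation are illustrative assumptions; the cited systems combine context in various, more sophisticated ways.

```python
def combine_context_features(comment_vec, parent_vec, user_feats):
    """Concatenate comment, thread-context, and user-level features
    into one input vector for an abusive-language classifier."""
    return list(comment_vec) + list(parent_vec) + list(user_feats)

# Toy vectors: an embedding of the comment, an embedding of its parent
# in the thread, and numeric user-history features (e.g. post count,
# toxic fraction from the previous sketch).
x = combine_context_features([0.1, 0.3], [0.0, 0.2], [5.0, 0.4])
print(x)  # [0.1, 0.3, 0.0, 0.2, 5.0, 0.4]
```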