In the past decade, social networking sites have become central forums for public discourse and political engagement. Of particular interest is the role that Twitter plays in facilitating political discourse. To this end, the existing literature argues that a healthy political discussion space is key to maintaining a trusting and robust democratic society. Using Suler’s online disinhibition effect as a theoretical orientation, this study examines the extent of incivility on Twitter in discourse regarding the top three 2020 Democratic primary candidates. A corpus of 18,237,296 tweets was analyzed to assess the extent to which incivility dominated Twitter discourse surrounding these candidates. Our results reveal that tweets mentioning Senator Elizabeth Warren were associated with higher levels of uncivil discourse than tweets mentioning Senator Bernie Sanders or former Vice President Joe Biden. Interestingly, there does not appear to be a relationship between anonymity and incivility, as uncivil tweets were just as likely to originate from accounts that identified users’ names as from anonymous or pseudonymous accounts. Finally, our findings provide evidence that certain policy issues are more closely related to uncivil discourse than others. Using k-means clustering, we show that the issues of gun control and immigration are closely associated with mentions of Warren, and fiscal policy with mentions of Sanders; however, we did not find any policy keywords linked to Biden.
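As a rough illustration of the clustering step mentioned above, the sketch below applies k-means to TF-IDF vectors of candidate-mention tweets and reports the top terms in each cluster. The toy tweets, the number of clusters, and the vectorizer settings are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch: cluster candidate-mention tweets and surface policy keywords.
# Toy data and parameters are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "Warren's plan on gun control is the only serious one",
    "Immigration reform is why I'm backing Warren",
    "Sanders will fix fiscal policy and tax the billionaires",
    "Biden leads the polls again this week",
]

# Represent each tweet as a TF-IDF vector over its vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

# Group tweets into k clusters (k chosen arbitrarily here).
kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)

# Report the highest-weighted terms in each cluster centroid.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(kmeans.cluster_centers_):
    top = centroid.argsort()[::-1][:3]
    print(f"cluster {i}:", [terms[j] for j in top])
```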
Large language models trained on a mixture of NLP tasks converted into a text-to-text format using prompts can generalize to novel forms of language and handle novel tasks. A large body of work within prompt engineering attempts to understand the effects of input forms and prompts on achieving superior performance. We consider an alternative measure and ask whether the way in which an input is encoded affects the social biases promoted in outputs. In this paper, we study T0, a large-scale multi-task text-to-text language model trained using prompt-based learning. We consider two different forms of semantically equivalent inputs: a question-answer format and a premise-hypothesis format. We use an existing bias benchmark for the former, BBQ (Parrish et al., 2021), and create the first bias benchmark in natural language inference, BBNLI, with hand-written hypotheses, while also converting each benchmark into the other form. The results on the two benchmarks suggest that, given two different formulations of essentially the same input, T0 exhibits conspicuously more bias in the question-answering form, which is seen during training, than in the premise-hypothesis form, which is unlike its training examples. Code and data are released at https://github.com/feyzaakyurek/bbnli.
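The following sketch, which is not the released evaluation code, illustrates how the same bias probe can be posed to a T0 checkpoint in both a question-answer and a premise-hypothesis formulation; the checkpoint name, context sentence, and prompt wording are assumptions for illustration only.

```python
# Minimal sketch of probing a T0-style model with two semantically equivalent
# input formats: question-answer vs. premise-hypothesis. Not the authors' code;
# the model name and prompt text are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "bigscience/T0_3B"  # assumption: any T0 checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = "A software engineer and a nurse were discussing salaries."

# Question-answer formulation (resembles formats seen in T0's training mixture).
qa_prompt = f"{context} Who is more likely to be a woman? Answer:"

# Premise-hypothesis formulation of essentially the same probe.
nli_prompt = (f"Premise: {context} "
              f"Hypothesis: The nurse is a woman. "
              f"Does the premise entail the hypothesis? Answer yes, no, or maybe.")

for prompt in (qa_prompt, nli_prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=10)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Comparing the model's answers across many such paired prompts is one way to quantify how strongly the input encoding influences the biases expressed in the outputs.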
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.