Social media provide academics with one of the most direct routes for sharing their work. To disseminate research, scholars once relied on university media services or journal press releases, but today any academic can turn to Twitter to share their published work with a broader audience that stretches well beyond their friends, family, or even academic community. The possibility that scholars can push their research out, rather than hope that it is pulled in, holds the potential to draw wide attention to that research. In this manuscript, we provide a broad empirical investigation of whether Twitter offers any advantage to academics who share their work via social media, and we then ask whether that benefit accrues equitably to all academics who participate, or whether it instead exacerbates inequalities in research dissemination that exist "offline." In considering these inequalities, we focus on the specific case of gender: we investigate the extent to which there are gender differences in the dissemination of research via Twitter. We carry out our analyses by tracking tweet patterns for articles published in six journals across two fields (political science and communication), and we pair these Twitter data with demographic and educational data about the authors of the published articles, as well as article citation rates. We find considerable evidence that, overall, article citations are positively correlated with tweets about the article, and we find little evidence to suggest that author gender affects the transmission of research in this new medium.
Relying on social media, researchers can reach practitioners, journalists, and the public at large [1,2,3,4]. Twitter seems to offer tremendous benefits [5], especially given increasing
Organizations often employ data-driven models to inform decisions that can have a significant impact on people's lives (e.g., university admissions, hiring). In order to protect people's privacy and prevent discrimination, these decision-makers may choose to delete or avoid collecting social category data, like sex and race. In this article, we argue that such censoring can exacerbate discrimination by making biases more difficult to detect. We begin by detailing how computerized decisions can lead to biases in the absence of social category data and in some contexts, may even sustain biases that arise by random chance. We then show how proactively using social category data can help illuminate and combat discriminatory practices, using cases from education and employment that lead to strategies for detecting and preventing discrimination. We conclude that discrimination can occur in any sociotechnical system in which someone decides to use an algorithmic process to inform decision-making, and we offer a set of broader implications for researchers and policymakers.
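The argument above turns on a simple point: disparities cannot be measured if the social category was never recorded. A minimal sketch of that idea, using entirely hypothetical hiring data and the common four-fifths rule of thumb (not a case from the article), is:

```python
# Minimal sketch with made-up data: a four-fifths-rule disparity check
# that is only possible when the social category ("group" here,
# a hypothetical field) has been collected alongside each decision.
from collections import Counter

decisions = [  # (group, hired) -- illustrative records only
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, hires = Counter(), Counter()
for group, hired in decisions:
    totals[group] += 1
    hires[group] += hired  # True counts as 1

# Selection rate per group, then the ratio of the lowest to highest rate.
rates = {g: hires[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

# A ratio below 0.8 is a conventional red flag for adverse impact.
print(rates, ratio)
```

If the `group` column had been censored, `rates` could not be computed at all, which is precisely how deleting category data can make bias harder to detect.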
Online discussions take place under the gaze of fellow users. To increase engagement, platforms typically let these users evaluate the comments made by others through rating systems (e.g., Likes or Up/Down votes). Understanding how such ratings shape, and are shaped by, features of the underlying discussion is important for our understanding of online behavior. In this study, we focus on an increasingly concerning aspect of online discussions: incivility. We draw on the theory of normative social behavior to analyze a data set of over 6,000 online newspaper comments. We find that repeated incivility by the same person is more likely when their initial incivility was affirmed by both descriptive norms (incivility in nearby comments) and injunctive norms (Up votes). Repeated incivility receives more Up votes if nearby comments also include incivility but fewer Up votes if they do not, suggesting that injunctive norms are contextual and shaped by descriptive norms. We conclude that online incivility is a dynamic, normative process that is responsive to both positive feedback and proximate incivility.
Incivility in public discourse has been a major concern in recent times, as it can negatively affect the quality and tenacity of the discourse. In this paper, we present neural models that can learn to detect name-calling and vulgarity in a newspaper comment section. We show that, in contrast to prior work on detecting toxic language, fine-grained incivilities like name-calling cannot be accurately detected by simple models such as logistic regression. We apply the models trained on the newspaper comments data to detect uncivil comments in a Russian troll dataset, and find that despite the change of domain, the model makes accurate predictions.
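For context on the comparison above, a typical "simple model" baseline of the kind the abstract says falls short looks like the following. This is a generic illustrative sketch with toy labels, not the authors' pipeline or data:

```python
# Minimal sketch (assumption: toy examples, not the newspaper corpus):
# a bag-of-words + logistic-regression baseline for binary name-calling
# detection, the class of simple model the paper argues is insufficient
# for fine-grained incivility.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled comments: 1 = contains name-calling, 0 = civil.
comments = [
    "You are a complete idiot and everyone knows it",
    "Thanks for sharing this thoughtful analysis",
    "Only a fool would believe that argument",
    "I appreciate the detailed sourcing in this piece",
]
labels = [1, 0, 1, 0]

# TF-IDF unigrams/bigrams fed into a regularized logistic regression.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
baseline.fit(comments, labels)

# Predict a label (0 or 1) for an unseen comment.
print(baseline.predict(["what a thoughtful reply"])[0])
```

Such a model keys on surface lexical cues, which is one plausible reason it struggles with context-dependent incivility compared with the neural models the paper proposes.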