Background: β-blocker therapy and β-adrenergic receptor (β-AR) polymorphisms are associated with increases in glucose and lipid levels. We investigated associations of common β1- and β2-AR single nucleotide polymorphisms (SNPs) with metabolic and lipid variables, and examined interactions with β-blocker treatment assignment affecting these parameters.

Methods: This was a post hoc analysis of a double-blinded clinical trial of non-diabetic, hypertensive individuals who were randomized to receive carvedilol or metoprolol succinate. Fasting glucose, insulin, and lipid levels were measured at baseline and after 3 and 6 months. Genotypes for the β1-AR SNPs Ser49Gly and Gly389Arg and the β2-AR SNPs Arg16Gly and Gln27Glu were determined. Multivariable mixed models were used to examine associations between β-AR polymorphisms, metabolic parameters, and SNP interactions with β-blocker therapy (pinteraction).

Results: The 322 subjects had a mean (standard deviation) age of 51.5 (11.2) years. After 6 months, insulin levels increased by 35.6% on metoprolol but decreased by 9.9% on carvedilol (p=0.015). In univariate models, subjects with the Gln27Gln genotype had higher overall insulin levels with β-blockade compared to those with the Glu27Glu genotype (p=0.006). Both the Arg16Gly (p=0.012) and Gln27Glu (p=0.037) SNPs were associated with triglyceride levels. An interaction between the Arg16Gly SNP and treatment was identified (pinteraction=0.048).

Conclusions: These data suggest that insulin and triglyceride levels may be influenced by β2-AR polymorphisms in patients taking β-blockers.
For social media services, content moderation — the policies, practices, and tools providers have in place to address online abuse — is their primary means of defending users from everything from mis- and disinformation to hate speech and online extremism. As the list of potential abuses continues to grow, online service providers have commensurately built new systems to enforce their content policies, largely by removing or reducing the visibility of potentially abusive content. But social media services don't always inform users when they are moderating their content: a user's comment may appear normally to the user but be hidden from others; a user's handle may disappear from search; a user's original content may be ranked so low by a recommendation algorithm that, for all intents and purposes, it becomes undiscoverable. On the internet, people refer to this broad range of undisclosed content moderation actions as shadowbanning.

The goal of this paper is to bridge the gap between social media companies, end users, and the broader public in how each understands shadowbanning, in order to help social media companies better manage disclosures to users about content moderation. We aim to do this by critically examining three questions: (1) What is shadowbanning? (2) Who is affected by shadowbanning? (3) What larger effects does shadowbanning have?

The first section of this paper defines the term shadowbanning, reviews specific shadowbanning practices in areas such as search, comments, and recommendations, and discusses the reasons social media companies give for why they engage in shadowbanning. The second section looks at which groups may be disproportionately affected by shadowbanning and describes how users diagnose and respond to their own shadowbanning. The third section explores the consequences of shadowbanning for individuals, groups, and society at large.
The fourth and final section recommends three ways social media services can mitigate the harms of shadowbanning: sharply limiting the circumstances in which they shadowban, “shedding light” on shadowbanning by publishing their policies about when they shadowban and data about content or accounts subject to opaque content moderation, and creating mechanisms to allow researchers to learn more about the potential harms shadowbanning may cause.
What role has social media played in society? Did it influence the rise of Trumpism in the U.S. and the passage of Brexit in the UK? What about the way authoritarians exercise power in India or China? Has social media undermined teenage mental health? What about its role in building social and community capital, promoting economic development, and so on? To answer these and other important policy-related questions, researchers, including academics and journalists, need access to data from social media companies. However, this data is generally not available to researchers outside of social media companies, and where it is available, it is often insufficient, leaving us with incomplete answers. The problem is complex but not intractable. In this report, we look to other industries where companies share data with researchers while also addressing privacy and other concerns. In doing so, our analysis contributes to current public and corporate discussions about how to safely and effectively share social media data with researchers. We review experiences based on the governance of clinical trials, electricity smart meters, and environmental impact data.