On May 25, 2020, George Floyd, an unarmed Black American man, was killed by a White police officer. Footage of the murder was widely shared. We examined the psychological impact of Floyd’s death using two population surveys that collected data before and after his death: one from Gallup (117,568 responses from n = 47,355) and one from the US Census Bureau (409,652 responses from n = 319,471). According to the Gallup data, in the week following Floyd’s death, anger and sadness increased to unprecedented levels in the US population, with more than a third of the US population reporting these emotions. These increases were more pronounced among Black Americans, nearly half of whom reported these emotions. According to the US Census Household Pulse data, in the week following Floyd’s death, depression and anxiety severity increased among Black Americans at significantly higher rates than among White Americans. Our estimates suggest that this increase corresponds to an additional 900,000 Black Americans who would have screened positive for depression, associated with a burden of roughly 2.7 million to 6.3 million mentally unhealthy days.
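As a rough illustration of how the reported burden figures fit together, the sketch below recovers the 2.7 to 6.3 million mentally-unhealthy-day range from an excess-cases estimate. The population base, pre/post screening rates, and per-case day counts are hypothetical placeholders, not values reported in the study; only the ~900,000 excess cases and the 2.7–6.3 million day range appear in the abstract.

```python
# Illustrative back-of-the-envelope calculation, not the authors' estimation model.
# All inputs below are hypothetical and chosen only to reproduce the abstract's figures.

black_adult_population = 30_000_000   # hypothetical population base
rate_before = 0.25                    # hypothetical pre-period depression screen-positive rate
rate_after = 0.28                     # hypothetical post-period rate (+3 percentage points)

excess_cases = black_adult_population * (rate_after - rate_before)   # ~900,000

# Burden expressed as mentally unhealthy days, assuming each excess case contributes
# roughly 3-7 such days over the window (hypothetical bounds).
burden_low = excess_cases * 3
burden_high = excess_cases * 7

print(f"Excess positive screens: {excess_cases:,.0f}")
print(f"Mentally unhealthy days: {burden_low:,.0f} to {burden_high:,.0f}")
```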
Social media is propagating COVID-19-related misinformation with growing speed, reach, and potential impact, yet little work has focused on the potential applications of these data for informing public health communication about COVID-19 vaccines. We used Twitter to access a random sample of over 78 million vaccine-related tweets posted between December 1, 2020, and February 28, 2021, to describe the geographical and temporal variation in COVID-19 vaccine discourse. Urban suburbs posted about equitable distribution in communities; college towns talked about in-clinic vaccinations near universities; evangelical hubs posted about Operation Warp Speed and thanking God; exurbs posted about the 2020 election; Hispanic centers posted about concerns around food and water; and counties in the American Communities Project (ACP) African American South posted about issues of trust, hesitancy, and history. The ACP Graying America community posted about the federal government’s failures; rural Middle America counties posted about news press conferences. Topics related to allergic and adverse reactions, misinformation around Bill Gates and China, and issues of trust in the healthcare system among Black Americans were more prevalent in December; topics related to questions about mask wearing, reaching herd immunity and natural infection, and concerns about nursing home residents and workers increased in January; and themes around access in Black communities, waiting for appointments, keeping family safe by vaccinating, and fighting online misinformation campaigns were more prevalent in February. Twitter discourse around COVID-19 vaccines in the United States varied significantly across different communities and changed over time; these insights could inform targeted messaging and mitigation strategies.
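One simple way to surface this kind of geographic and temporal variation is to tag tweets with theme keywords and track their prevalence by community type and month. The sketch below is a minimal illustration, not the authors' pipeline; the input file, column names, and keyword lists are hypothetical.

```python
# Minimal sketch: theme prevalence by ACP community type and month.
# The CSV, its columns (text, county_type, created_at), and the keyword
# lists are hypothetical, for illustration only.
import pandas as pd

tweets = pd.read_csv("vaccine_tweets.csv")
tweets["month"] = pd.to_datetime(tweets["created_at"]).dt.to_period("M")

themes = {
    "trust_hesitancy": ["trust", "hesitant", "tuskegee"],
    "adverse_reactions": ["allergic", "side effect", "reaction"],
    "access": ["appointment", "waiting", "access"],
}

# Flag tweets mentioning any keyword for each theme.
for name, keywords in themes.items():
    pattern = "|".join(keywords)
    tweets[name] = tweets["text"].str.contains(pattern, case=False, na=False)

# Share of tweets mentioning each theme, by community type and month.
trend = tweets.groupby(["county_type", "month"])[list(themes)].mean()
print(trend.head())
```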
Objective: Language patterns may elucidate mechanisms of mental health conditions. To inform underlying theory and risk models, we evaluated prospective associations between in vivo text messaging language and differential symptoms of depression, generalized anxiety, and social anxiety. Methods: Over 16 weeks, we collected outgoing text messages from 335 adults. Using Linguistic Inquiry and Word Count (LIWC), the NRC Emotion Lexicon, and previously established depression and stress dictionaries, we evaluated the degree to which language features predict symptoms of depression, generalized anxiety, or social anxiety the following week using hierarchical linear models. To isolate the specificity of language effects, we also controlled for the effects of the two other symptom types. Results: We found significant relationships of language features, including personal pronouns, negative emotion, cognitive and biological processes, and informal language, with common mental health conditions, including depression, generalized anxiety, and social anxiety (ps < .05). There was substantial overlap between language features and the three mental health outcomes. However, after controlling for the other symptoms in the models, depressive symptoms were uniquely negatively associated with language about anticipation, trust, social processes, and affiliation (βs: −.10 to −.09, ps < .05), whereas generalized anxiety symptoms were positively linked with these same language features (βs: .12–.13, ps < .001). Social anxiety symptoms were uniquely associated with anger, sexual language, and swearing (βs: .12–.13, ps < .05). Conclusion: Language that confers both common (e.g., personal pronouns and negative emotion) and specific (e.g., affiliation, anticipation, trust, and anger) risk for affective disorders is perceptible in prior-week text messages, holding promise for understanding cognitive-behavioral mechanisms and tailoring digital interventions.
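Below is a minimal sketch of the kind of lagged, hierarchical (mixed-effects) model described above, assuming a participant-by-week data layout. The input file, column names, and exact specification are hypothetical and may differ from the authors' models; it only illustrates predicting next-week depressive symptoms from prior-week language while controlling for the other symptom scales.

```python
# Hedged sketch of a lagged mixed-effects model; all file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_language_symptoms.csv")   # hypothetical: one row per participant-week
df = df.sort_values(["participant_id", "week"])

# Lag the language features so that week t language predicts week t+1 symptoms.
lagged = df.groupby("participant_id")[["affiliation", "negative_emotion"]].shift(1)
df["affiliation_lag"] = lagged["affiliation"]
df["negemo_lag"] = lagged["negative_emotion"]

data = df.dropna(subset=["affiliation_lag", "negemo_lag"])

# Random intercept per participant; the other symptom scales are entered as
# covariates to isolate language effects specific to depression.
model = smf.mixedlm(
    "depression ~ affiliation_lag + negemo_lag + generalized_anxiety + social_anxiety",
    data=data,
    groups=data["participant_id"],
)
print(model.fit().summary())
```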
In information retrieval evaluation, when presented with an effectiveness difference between two systems, there are three relevant questions one might ask. First, is the difference statistically significant? Second, is the comparison stable with respect to assessor differences? Finally, is the difference actually meaningful to a user? This paper tackles the last two questions, about assessor differences and user preferences, in the context of the newly introduced tweet timeline generation task in the TREC 2014 Microblog track, where the system's goal is to construct an informative summary of non-redundant tweets that addresses the user's information need. Central to the evaluation methodology are human-generated semantic clusters of tweets that contain substantively similar information. We show that the evaluation is stable with respect to assessor differences in clustering and that user preferences generally correlate with effectiveness metrics, even though users are not explicitly aware of the semantic clustering being performed by the systems. Although our analyses are limited to this particular task, we believe that the lessons learned could generalize to other evaluations based on establishing semantic equivalence between information units, such as nugget-based evaluations in question answering and temporal summarization.
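To make the stability question concrete, one common check is to correlate the system rankings induced by different assessors' judgments; a high rank correlation indicates the evaluation is robust to assessor disagreement. The sketch below uses hypothetical per-system scores and Kendall's τ; the track's actual cluster-based effectiveness metrics would be substituted in practice.

```python
# Minimal sketch: rank correlation of system scores under two assessors' judgments.
# The run names and scores are hypothetical, for illustration only.
from scipy.stats import kendalltau

assessor_a = {"run1": 0.42, "run2": 0.35, "run3": 0.51, "run4": 0.28}
assessor_b = {"run1": 0.40, "run2": 0.37, "run3": 0.49, "run4": 0.25}

systems = sorted(assessor_a)
tau, p_value = kendalltau(
    [assessor_a[s] for s in systems],
    [assessor_b[s] for s in systems],
)
print(f"Kendall's tau between assessors: {tau:.2f} (p = {p_value:.3f})")
```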