We investigate the psychological recovery process of full-time employees during a 2-week period at the onset of the Coronavirus pandemic (COVID-19). Past research suggests that recovery processes start after stressors abate and can take months or years to unfold. In contrast, we build on autonomy restoration theory to suggest that recovery of impaired autonomy starts immediately, even as a stressor is ongoing. Using growth curve modeling, we examined the temporal trajectories of two manifestations of impaired autonomy—powerlessness and (lack of) authenticity—to test whether recovery began as the pandemic unfolded. We tested our predictions using a unique experience-sampling dataset collected over a 2-week period beginning on the Monday after COVID-19 was declared a “global pandemic” by the World Health Organization and a “national emergency” by the U.S. Government (March 16–27, 2020). Results suggest that autonomy restoration was activated even as the pandemic worsened. Employees reported decreasing powerlessness and increasing authenticity during this period, despite their subjective stress levels not improving. Further, the trajectories of recovery for both powerlessness and authenticity were steeper for employees higher (vs. lower) in neuroticism, a personality characteristic central to stress reactions. Importantly, these patterns did not emerge in a second experience-sampling study conducted prior to the COVID-19 crisis (September 9–20, 2019), highlighting how the pandemic initially threatened employee autonomy, but also how employees began to recover their sense of autonomy almost immediately. The present research provides novel insights into employee well-being during the COVID-19 pandemic and suggests that psychological recovery can begin during a stressful experience.
The current research explores how local racial diversity affects Whites’ efforts to structure their local communities to avoid incidental intergroup contact. In two experimental studies (N=509; Studies 1a-b), we consider Whites’ choices to structure a fictional, diverse city and find that Whites choose greater racial segregation around more (vs. less) self-relevant landmarks (e.g., their workplace and children’s school). Specifically, the more time they expect to spend at a landmark, the more they concentrate other Whites around that landmark, thereby reducing opportunities for incidental intergroup contact. Whites also structure environments to reduce incidental intergroup contact by instituting organizational policies that disproportionately exclude non-Whites: Two large-scale archival studies (Studies 2a-b) using data from every U.S. tennis (N=15,023) and golf (N=10,949) facility revealed that facilities in more racially diverse communities maintain more exclusionary barriers (e.g., guest policies, monetary fees, dress codes) that shield the patrons of these historically White institutions from incidental intergroup contact. In a final experiment (N=307; Study 3), we find that Whites’ anticipated intergroup anxiety is one driver of their choices to structure environments to reduce incidental intergroup contact in more (vs. less) racially diverse communities. Our results suggest that despite increasing racial diversity, White Americans structure local environments to fuel a self-perpetuating cycle of segregation.
Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been raising the alarm on why we should be cautious in interpreting and using these models: they are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road. Down the first path, we can continue to use these models without examining and addressing these critical flaws, relying on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to mitigate the deleterious outcomes associated with these models. This paper serves to light the way down the second path by identifying how extant psychological research can help examine and mitigate bias in ML models.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.