Receiving social support is critical for well-being, but concerns about a recipient’s reaction could make people reluctant to express such support. Our studies indicate that people’s expectations about how their support will be received predict their likelihood of expressing it (Study 1, N = 100 online adults), but these expectations are systematically miscalibrated. Participants who sent messages of support to others they knew (Study 2, N = 120 students) or who expressed support to a new acquaintance in person (Study 3, N = 50 adult pairs) consistently underestimated how positively their recipients would respond. A systematic perspective gap between expressers and recipients may explain miscalibrated expectations: Expressers may focus on how competent their support seems, whereas recipients may focus on the warmth it conveys (Study 4, N = 300 adults). Miscalibrated concerns about how to express support most competently may make people overly reluctant to reach out to someone in need.
Many everyday dilemmas reflect a conflict between two moral motivations: the desire to adhere to universal principles (integrity) and the desire to improve the welfare of specific individuals in need (benevolence). In this article, we bridge research on moral judgment and trust to introduce a framework that establishes three central distinctions between benevolence and integrity: (1) the degree to which they rely on impartiality, (2) the degree to which they are tied to emotion versus reason, and (3) the degree to which they can be evaluated in isolation. We use this framework to explain existing findings and generate novel predictions about the resolution and judgment of benevolence–integrity dilemmas. Though ethical dilemmas have long been a focus of moral psychology research, recent work has relied on dramatic dilemmas that pit utilitarianism against deontology and has failed to represent the ordinary yet psychologically taxing dilemmas we frequently face in everyday life. The present article fills this gap, thereby deepening our understanding of moral judgment and decision making and providing practical insights into how decision makers resolve moral conflict.
Although honesty is typically conceptualized as a virtue, it often conflicts with other equally important moral values, such as avoiding interpersonal harm. In the present research, we explore when and why honesty enables helpful versus harmful behavior. Across five incentive-compatible experiments in the context of advice-giving and economic games, we document four central results. First, honesty enables selfish harm: people are more likely to engage in and justify selfish behavior when selfishness is associated with honesty than when it is not. Second, people are selectively honest: people are more likely to be honest when honesty is associated with selfishness than when honesty is associated with altruism. Third, these effects are more consistent with genuine, rather than motivated, preferences for honesty. Fourth, even when individuals have no selfish incentive to be honest, honesty can lead to interpersonal harm because people avoid information about how their honest behavior affects others. This research unearths new insights into the mechanisms underlying moral choice and, consequently, the contexts in which moral principles are a force for good versus a force for evil.
One psychological barrier impeding saving behavior is the inability to fully empathize with one’s future self. Future-self interventions have improved savings by helping people overcome this obstacle. Despite the promise of such interventions, previous research has focused predominantly on hypothetical choices in Western settings, typically with undergraduate samples. Do interventions that encourage people to think more concretely about their future selves in retirement still improve behavior in consequential, real-world savings decisions? Using a field experiment in Mexico (N = 7,603), where less than 1% make a voluntary savings contribution annually, we developed a low-cost, easy-to-implement intervention to test whether concrete thinking about one’s future life increases signups for recurring retirement savings relative to a status-quo control group. We find that the future-self decision aids nearly quadrupled the likelihood of signing up for an automatic recurring savings plan compared with the control.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.