How does rumination affect reinforcement learning—the ubiquitous process by which people adjust behavior after errors to behave more effectively in the future? In a within-subjects design (N = 49), we tested whether experimentally manipulated rumination disrupts reinforcement learning in a multidimensional learning task previously shown to rely on selective attention. Rumination impaired performance, yet unexpectedly, this impairment could not be attributed to decreased attentional breadth (quantified using a decay parameter in a computational model). Instead, trait rumination (between subjects) was associated with higher decay rates (implying narrower attention) but not with impaired performance. Our task-performance results accord with the possibility that state rumination promotes stress-generating behavior in part by disrupting reinforcement learning. The trait-rumination finding accords with the predictions of a prominent model of trait rumination (the attentional-scope model). More work is needed to understand the specific mechanisms by which state rumination disrupts reinforcement learning.
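To make the decay parameter concrete, the sketch below illustrates one common way a feature-based reinforcement-learning model can quantify attentional breadth: weights of features belonging to the chosen stimulus are updated toward the reward, while weights of unchosen features decay toward zero at rate `decay`. The parameter names (alpha, decay, beta) and the exact update rule here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 9          # e.g., 3 stimulus dimensions x 3 feature values
alpha = 0.3             # learning rate for chosen features (assumed value)
decay = 0.5             # decay rate for unchosen features; higher -> narrower attention
beta = 5.0              # softmax inverse temperature (assumed value)

w = np.zeros(n_features)  # learned feature weights

def choose(stimuli, w, beta):
    """Softmax choice over stimuli, each described by a binary feature vector."""
    values = stimuli @ w
    p = np.exp(beta * values - np.max(beta * values))
    p /= p.sum()
    return rng.choice(len(stimuli), p=p), p

def update(w, chosen_features, reward, alpha, decay):
    """Move chosen-feature weights toward the reward; decay all unchosen features."""
    prediction_error = reward - chosen_features @ w
    w = w.copy()
    w[chosen_features == 1] += alpha * prediction_error
    w[chosen_features == 0] *= (1.0 - decay)   # higher decay = faster forgetting
    return w

# One illustrative trial: three stimuli, each carrying 3 of the 9 possible features.
stimuli = np.zeros((3, n_features))
for i, feats in enumerate([(0, 3, 6), (1, 4, 7), (2, 5, 8)]):
    stimuli[i, list(feats)] = 1

choice, _ = choose(stimuli, w, beta)
reward = float(rng.random() < 0.75)   # placeholder reward probability
w = update(w, stimuli[choice], reward, alpha, decay)
```

In a formulation like this, a higher decay rate erases the weights of unattended features more quickly, which is one way "narrower attention" can be operationalized and estimated from choice behavior.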
Much work has been done to engineer robots' mechanical capabilities to best suit the general demands of their users and tasks. However, minimal research has addressed how individual differences shape perceptions of robot trustworthiness. This study examined the relationship between personality and human-robot interaction in two contexts: (1) error-free robot behavior and (2) robot errors. Individual differences were assessed with the Interpersonal Reactivity Index (IRI; Davis, 1980), and robot trust was assessed with the Multidimensional Measure of Trust (MDMT; Ullman & Malle, 2018). This project provided a novel contribution to the field of human-robot interaction by highlighting the influence of technological failure on trust impressions of a social robot. Additionally, we sought to understand the degree to which empathy mediates these changes in trust. These conclusions can provide guidance for optimizing adaptive robotic systems in education, healthcare, and industry settings.