Intergroup dynamics shape the ways in which we interact with other people. We feel more empathy towards ingroup members than towards outgroup members, and can even feel pleasure, known as schadenfreude, when an outgroup member experiences misfortune. Here, we test the extent to which these intergroup biases emerge during interactions with robots. We measured trial-by-trial fluctuations in emotional reactivity to the outcome of a competitive reaction time game to assess both empathy and schadenfreude in arbitrarily assigned human-human and human-robot teams. Across four experiments (total n = 361), we observed a consistent empathy and schadenfreude bias driven by team membership: people felt more empathy towards ingroup members than outgroup members, and more schadenfreude towards outgroup members. This intergroup bias did not depend on the nature of the agent: the same effects were observed for human-human and human-robot teams, and people reported similar levels of empathy and schadenfreude towards a human and a robot player. The human likeness of the robot did not consistently influence the bias either; similar empathy and schadenfreude biases were observed for both humanoid and mechanoid robots. For all teams, the bias was modulated by the level of team identification: individuals who identified more strongly with their team showed a stronger intergroup empathy and schadenfreude bias. Together, we show that the intergroup dynamics that shape our interactions with people can also shape our interactions with robots. Our results highlight the importance of taking intergroup biases into account when examining the social dynamics of human-robot interaction.
Humans can quickly recognize objects in a dynamically changing world. This ability is showcased by rapid serial visual presentation (RSVP) tasks, in which observers succeed at recognizing objects in rapidly changing image sequences at up to 13 ms/image. To date, the mechanisms that govern such dynamic object recognition remain poorly understood. Here, we developed deep learning models for dynamic recognition and compared different computational mechanisms, contrasting feedforward and recurrent processing, single-image and sequential processing, and different forms of adaptation. We found that only models that integrate images sequentially via lateral recurrence mirrored human performance (N = 36) and were predictive of trial-by-trial responses across image durations (13–80 ms/image). Importantly, these models also captured how human performance changes with presentation duration: models processing each image for a few time steps captured human object recognition at shorter presentation durations, whereas models processing images for more time steps captured it at longer durations. Furthermore, augmenting such a recurrent model with power-law adaptation markedly improved dynamic recognition performance and accelerated its representational dynamics, allowing it to predict human trial-by-trial responses using fewer processing resources. Together, these findings provide new insights into the mechanisms that render object recognition so fast and effective in a dynamic visual world.
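To make the mechanism concrete, below is a minimal sketch in PyTorch of what sequential integration via lateral recurrence with adaptation could look like. It is not the authors' implementation: the layer sizes, ReLU nonlinearity, mean-pooled linear readout, and the leaky (exponential) accumulator standing in for power-law adaptation are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LateralRecurrentRSVP(nn.Module):
    """Toy model: one frame enters per time step; a hidden state carries
    evidence across frames via lateral (within-layer) recurrence, and an
    adaptation variable suppresses units in proportion to recent activity."""

    def __init__(self, in_ch=3, hid_ch=32, n_classes=10,
                 adapt_gain=0.5, adapt_decay=0.8):
        super().__init__()
        self.ff = nn.Conv2d(in_ch, hid_ch, 3, padding=1)    # feedforward drive from the current frame
        self.lat = nn.Conv2d(hid_ch, hid_ch, 3, padding=1)  # lateral recurrent connections
        self.readout = nn.Linear(hid_ch, n_classes)
        self.hid_ch = hid_ch
        self.adapt_gain = adapt_gain    # strength of activity-dependent suppression
        self.adapt_decay = adapt_decay  # per-step decay of the adaptation state

    def forward(self, frames):
        # frames: (batch, time, channels, height, width) -- an RSVP stream;
        # longer image durations correspond to repeating a frame across steps.
        b, t, _, h, w = frames.shape
        state = frames.new_zeros(b, self.hid_ch, h, w)
        adapt = torch.zeros_like(state)
        logits = []
        for step in range(t):
            drive = self.ff(frames[:, step]) + self.lat(state)
            # Subtractive adaptation: recently active units respond less.
            state = F.relu(drive - self.adapt_gain * adapt)
            # Leaky accumulation of past activity; a simple exponential
            # stand-in for the power-law adaptation named in the abstract.
            adapt = self.adapt_decay * adapt + state
            logits.append(self.readout(state.mean(dim=(2, 3))))
        return torch.stack(logits, dim=1)  # (batch, time, n_classes)

model = LateralRecurrentRSVP()
rsvp = torch.randn(4, 6, 3, 64, 64)  # 4 trials, 6 frames each
print(model(rsvp).shape)             # torch.Size([4, 6, 10])
```

Under this scheme, varying the presentation duration maps onto the number of consecutive time steps each image occupies, and reading out the per-step logits lets one compare model reports against human trial-by-trial responses at matched durations.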