Effectively coordinating one’s behavior with that of others is essential for successful multiagent activity. Understanding the dynamical principles that underlie such coordination has received increased attention in recent years, owing to a growing interest in behavioral synchrony and complex-systems phenomena. Here we examined the behavioral dynamics of a novel multiagent shepherding task, in which pairs of individuals had to contain small herds of virtual sheep within the center of a virtual game field. Initially, all pairs adopted a complementary, search-and-recover mode of behavioral coordination. Over the course of game play, however, a significant number of pairs spontaneously discovered a more effective coupled-oscillatory containment mode of behavior. Analysis and modeling revealed that both behavioral modes were defined by the task’s underlying dynamics and, moreover, reflected context-specific realizations of the lawful dynamics that define functional shepherding behavior more generally.
Multiagent activity is commonplace in everyday life and can improve the behavioral efficiency of task performance and learning. Accordingly, augmenting social contexts with interactive virtual and robotic agents is of great interest across health, sport, and industry domains. However, the effectiveness of human–machine interaction (HMI) for training humans for future social encounters depends on the ability of artificial agents to respond to human coactors in a natural, human-like manner. One way to achieve effective HMI is to develop dynamical models of human multiagent coordination, built from dynamical motor primitives (DMPs), that not only capture the behavioral dynamics of successful human performance but also provide a tractable control architecture for computerized agents. Previous research has demonstrated that DMPs can capture the human-like dynamics of simple nonsocial, single-actor movements. However, it is unclear whether DMPs can be used to model more complex multiagent task scenarios. This study tested this human-centered approach to HMI using a complex dyadic shepherding task, in which pairs of coacting agents had to work together to corral and contain small herds of virtual sheep. Human–human and human–artificial agent dyads were tested across two different task contexts. The results revealed (i) that the performance of human–human dyads was equivalent to that of dyads composed of a human and an artificial agent and (ii) that, using a “Turing-like” methodology, most participants in the HMI condition were unaware that they were working alongside an artificial agent, further validating the isomorphism of human and artificial agent behavior.
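For readers unfamiliar with the construct, the sketch below illustrates a generic discrete dynamical motor primitive: a critically damped point attractor whose trajectory can be shaped by a phase-dependent forcing term. This is a minimal, assumed formulation for illustration only, not the specific model used in the shepherding study; the parameter names (alpha_z, beta_z, alpha_x, tau) follow common DMP conventions and are placeholders.

```python
import numpy as np

def simulate_dmp(y0, goal, duration=1.0, dt=0.001,
                 alpha_z=25.0, beta_z=25.0 / 4.0, alpha_x=8.0,
                 forcing=None):
    """Integrate a minimal discrete dynamical motor primitive (DMP).

    Transformation system: tau*dv = alpha_z*(beta_z*(goal - y) - v) + f(x)
    Canonical system:      tau*dx = -alpha_x * x
    With f == 0 the system is a critically damped point attractor that
    converges smoothly from y0 to the goal.
    """
    tau = duration
    n_steps = int(duration / dt)
    y, v, x = y0, 0.0, 1.0
    trajectory = np.empty(n_steps)
    for i in range(n_steps):
        f = forcing(x) if forcing is not None else 0.0
        dv = (alpha_z * (beta_z * (goal - y) - v) + f) / tau
        dy = v / tau
        dx = -alpha_x * x / tau
        v += dv * dt
        y += dy * dt
        x += dx * dt
        trajectory[i] = y
    return trajectory

# Example: a reach-like movement from position 0 to 1 over one second.
traj = simulate_dmp(y0=0.0, goal=1.0)
print(traj[0], traj[-1])  # starts near 0, ends near the goal
```

Because the primitive is expressed as a low-dimensional differential equation, it can serve both as a descriptive model of human movement and as a controller for an artificial coactor, which is the appeal of the DMP approach described in the abstract above.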
Assessment of deficits in oculomotor function may be useful for detecting visuomotor impairments caused by closed head injury. Systematic analysis schemes are needed to reliably quantify oculomotor deficits associated with brain trauma. We propose a systematic, automated analysis scheme that uses a battery of eye-tracking tasks to assess oculomotor function in a cohort of adolescents with acute concussion symptoms and age-matched healthy controls. These data provide evidence that the proposed methods reliably detect oculomotor deficits in the concussed group, including reduced spatial accuracy and diminished tracking performance during visually guided prosaccade and self-paced saccade tasks. The accuracy and tracking deficits are consistent with prior studies of oculomotor function, while the proposed measures offer novel discriminatory power relative to fixation assessments (methodologically, a less complicated measure of performance) and thus represent a reliable and simple scheme for detecting and analyzing oculomotor deficits associated with brain injury.
The coordination of attention between individuals is a fundamental part of everyday human social interaction. Previous work has focused on the role of gaze information for guiding responses during joint attention episodes. However, in many contexts, hand gestures such as pointing provide another valuable source of information about the locus of attention. The current study developed a novel virtual reality paradigm to investigate the extent to which initiator gaze information is used by responders to guide joint attention responses in the presence of more visually salient and spatially precise pointing gestures. Dyads were instructed to use pointing gestures to complete a cooperative joint attention task in a virtual environment. Eye and hand tracking enabled real-time interaction and provided objective measures of gaze and pointing behaviours. Initiators displayed gaze behaviours that were spatially congruent with the subsequent pointing gestures. Responders overtly attended to the initiator’s gaze during the joint attention episode. However, both these initiator and responder behaviours were highly variable across individuals. Critically, when responders did overtly attend to their partner’s face, their saccadic reaction times were faster when the initiator’s gaze was also congruent with the pointing gesture, and thus predictive of the joint attention location. These results indicate that humans attend to and process gaze information to facilitate joint attention responsivity, even in contexts where gaze information is implicit to the task and joint attention is explicitly cued by more spatially precise and visually salient pointing gestures.