In the rapid pursuit of automation, it is sometimes overlooked that an elaborate human-machine interplay is still necessary, even though a fully automated system, by definition, would not require a human interface. In the future, real-time sensing, intelligent processing, and dextrous manipulation will become more viable, but until then humans must remain in the loop for many critical processes. It is not obvious, however, how automated subsystems could account for human intervention, especially if a philosophy of "pure" automation dominates the design. Teleoperation, by contrast, emphasizes the creation of hardware pathways (e.g., hand-controllers, exoskeletons) to quickly communicate low-level control data to various mechanisms, while providing sensory feedback in a format suitable for human consumption (e.g., stereo displays, force reflection), leaving the "intelligence" to the human. These differences in design strategy, in both hardware and software, make it difficult to tie automation and teleoperation together while still allowing graceful transitions between them at the appropriate times. In no area of artificial intelligence is this problem more evident than in computer vision. Teleoperation typically uses video displays (monochrome/color, monoscopic/stereo) with contrast enhancement and gain control, without any digital processing of the images. However, increases in system performance such as automatic collision avoidance, path finding, and object recognition depend on computer vision techniques. Basically, computer vision relies on digital processing of the images to extract low-level primitives such as boundaries and regions, which are then used in higher-level processes for object recognition and position estimation. Real-time processing of complex environments is currently unattainable, but many aspects of the processing are useful for situation assessment, provided it is understood that the human can assist in the more time-consuming steps. This paper maps out the connections between computer vision and teleoperation, pointing to a new phase in the ongoing research on "supervised" or "semiautomatic" systems.
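The low-level stage described above (extracting boundary and region primitives from a digital image before any higher-level recognition) can be illustrated with a minimal Python sketch. The synthetic test image, gradient threshold, and intensity-quantization scheme below are illustrative assumptions, not the paper's method.

# Minimal sketch of low-level primitive extraction: boundaries via an
# intensity-gradient threshold, regions via coarse intensity quantization.
# All parameters here are illustrative assumptions.
import numpy as np

def extract_primitives(image, edge_threshold=0.25):
    """Return an edge map (boundaries) and a coarse region map from a
    2-D grayscale image given as a float array in [0, 1]."""
    # Boundaries: approximate the intensity gradient with central differences.
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, 1:-1] = (image[:, 2:] - image[:, :-2]) / 2.0
    gy[1:-1, :] = (image[2:, :] - image[:-2, :]) / 2.0
    edges = np.hypot(gx, gy) > edge_threshold

    # Regions: crude segmentation by intensity quantization; a real system
    # would use connected-component labeling or region growing instead.
    regions = np.digitize(image, bins=[0.25, 0.5, 0.75])
    return edges, regions

if __name__ == "__main__":
    # Synthetic test image: a bright square on a dark background.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0
    edges, regions = extract_primitives(img)
    print("edge pixels:", int(edges.sum()),
          "region labels:", len(np.unique(regions)))

In a semiautomatic setting of the kind the abstract argues for, outputs like these edge and region maps would be presented to the operator, who resolves the ambiguous or time-consuming recognition steps.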
A novel social interaction is a dynamic process in which participants adapt to, react to, and engage with their social partners. To facilitate such interactions, people gather information about the social context and structure of the situation. The current study aimed to deepen the understanding of the psychological determinants of behavior in a novel social interaction. Three social robots and the participant interacted non-verbally according to a pre-programmed “relationship matrix” that dictated who favored whom. Participants' gaze was tracked during the interaction and analyzed with Bayesian inference models to yield a measure of participants' social information-gathering behavior. Our results reveal the dynamics in a novel environment, wherein information-gathering behavior is initially predicted by psychological inflexibility and then, toward the end of the interaction, predicted by curiosity. These results highlight the utility of using social robots in behavioral experiments.
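The kind of Bayesian inference over a hidden "relationship matrix" mentioned above can be sketched in a simplified form: a toy observer watches which robot each robot gazes at and updates a posterior over candidate who-favors-whom hypotheses. The three-hypothesis set, gaze likelihood, and prior below are illustrative assumptions, not the study's actual model.

# Toy Bayesian update over candidate relationship matrices from gaze events.
# All hypotheses and probabilities are illustrative assumptions.
import numpy as np

# Each hypothesis maps robot i -> the robot it favors (3 robots, toy set).
hypotheses = [
    {0: 1, 1: 0, 2: 0},   # 0 and 1 favor each other; 2 favors 0
    {0: 2, 1: 2, 2: 1},   # 0 and 1 favor 2; 2 favors 1
    {0: 1, 1: 2, 2: 0},   # a circular favoring pattern
]
prior = np.full(len(hypotheses), 1.0 / len(hypotheses))

def likelihood(hypothesis, gazer, target, p_favor=0.8):
    """Probability that `gazer` looks at `target` under a hypothesis:
    robots mostly gaze at whom they favor, otherwise at the other robot."""
    return p_favor if hypothesis[gazer] == target else 1.0 - p_favor

def update(posterior, gazer, target):
    """One Bayesian update after observing a single gaze event."""
    weights = np.array([likelihood(h, gazer, target) for h in hypotheses])
    posterior = posterior * weights
    return posterior / posterior.sum()

# Example: a short stream of observed gaze events (gazer, target).
posterior = prior.copy()
for gazer, target in [(0, 1), (1, 0), (2, 0), (0, 1)]:
    posterior = update(posterior, gazer, target)
print("posterior over relationship hypotheses:", np.round(posterior, 3))

In this simplified picture, how quickly the posterior concentrates reflects how much diagnostic gaze information the observer has gathered, which is the quantity the study relates to psychological inflexibility and curiosity.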