This paper integrates Gross's cognitive process model into a hidden Markov model (HMM)-based emotional regulation method and implements human-robot emotional interaction through facial expressions and behaviors. In the cognitive emotional model, energy is the psychological driving force of emotional transition: an input facial expression is translated into external energy by an expression-emotion mapping. The robot's next emotional state is determined by the cognitive energy (the stimulus after cognition) together with the magnitude and source position of its own current emotional energy. Two random quantities in the emotional transition process, the emotional family and the specific emotional state in the arousal-valence-stance (AVS) 3D space, are used to simulate human emotion selection. The model was verified on an emotional robot with 10 degrees of freedom and more than 100 facial expressions. Experimental results show that the emotional regulation model does not simply classify and jump between a set of emotional labels; it operates in a 3D emotional space that yields a wide range of intermediate emotional states. A robot equipped with the cognitive emotional regulation model therefore behaves more intelligently and naturally, and can display its full emotional diversity during interaction.
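The energy-driven transition described above can be sketched in a few lines. This is an illustrative reconstruction only, not the authors' implementation: the emotion set, the AVS coordinates, and the way external energy flattens the transition distribution are all assumptions made for the sketch.

```python
import math
import random

# Assumed prototype points for a few emotional families in AVS
# (arousal, valence, stance) space; values are illustrative.
EMOTIONS = {
    "joy":      (0.8, 0.9, 0.6),
    "sadness":  (0.2, 0.1, 0.3),
    "anger":    (0.9, 0.2, 0.8),
    "surprise": (0.9, 0.6, 0.5),
    "calm":     (0.3, 0.6, 0.4),
}

def transition_probs(current, external_energy):
    """Turn distances in AVS space into transition probabilities.

    A larger external energy (the cognized stimulus) flattens the
    distribution, making a jump to a distant emotional state more likely.
    """
    here = EMOTIONS[current]
    scores = {}
    for name, point in EMOTIONS.items():
        dist = math.dist(here, point)
        scores[name] = math.exp(-dist / (0.2 + external_energy))
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def next_emotion(current, external_energy, rng=random):
    """Randomly select the next emotional state, weighted by energy."""
    probs = transition_probs(current, external_energy)
    names, weights = zip(*probs.items())
    return rng.choices(names, weights=weights, k=1)[0]
```

The random draw mirrors the paper's point that emotion selection is stochastic rather than a deterministic jump between labels; intermediate states would correspond to points between these prototypes in the 3D space.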
Belief bias is the tendency to accept conclusions that are compatible with existing beliefs more frequently than those that contradict them. It is one of the most replicated behavioral findings in the reasoning literature. Recently, neuroimaging studies using functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) have provided a new perspective, demonstrating neural correlates of belief bias that have been viewed as supportive of dual-process theories. However, fMRI studies have tended to focus on conclusion processing, while ERP studies have been concerned with the processing of premises. In the present research, the electrophysiological correlates of cognitive control were studied in 12 subjects using high-density ERPs. The analysis focused on the conclusion presentation phase and was limited to normatively sanctioned responses to valid-believable and valid-unbelievable problems. Results showed that when participants gave normatively sanctioned responses to problems where belief and logic conflicted, a more positive ERP deflection was elicited than for normatively sanctioned responses to nonconflict problems. This was observed from 400 to 200 ms before the correct response was given. The positive component is argued to be analogous to the late positive component (LPC) involved in cognitive control processes. This is consistent with the inhibition of empirically anomalous information when conclusions are unbelievable. These data help elucidate the neural correlates of belief bias by providing evidence for electrophysiological correlates of conflict resolution during conclusion processing. Moreover, they support dual-process theories of belief bias that propose conflict detection and resolution processes as central to the explanation of belief bias.
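The response-locked window analysis described above (mean amplitude from 400 to 200 ms before the response) can be sketched as follows. The sampling rate, function name, and data layout are assumptions for illustration, not details from the study.

```python
# Hedged sketch of a response-locked ERP window measure: average the
# voltage in a window defined relative to the response sample, so that
# conflict and nonconflict trials can later be compared.

def mean_window_amplitude(trial, response_idx, sfreq=500,
                          start_ms=-400, end_ms=-200):
    """trial: per-sample voltages for one epoch (list of floats).

    response_idx: sample index of the button press.
    The window [start_ms, end_ms) is relative to the response; negative
    values mean "before the response".
    """
    lo = response_idx + int(start_ms / 1000 * sfreq)
    hi = response_idx + int(end_ms / 1000 * sfreq)
    window = trial[lo:hi]
    return sum(window) / len(window)
```

In the study's terms, a more positive mean amplitude in this window on valid-unbelievable (conflict) trials than on valid-believable (nonconflict) trials would reflect the LPC-like cognitive control signature.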
This paper discusses the transference process of emotional states driven by psychological energy in the active field state space and builds a robot expression model based on the hidden Markov model (HMM). Facial expressions and behaviours are two important channels for human-robot interaction. Robot performance based on a static emotional state cannot vividly display dynamic and complex emotional transference, so building a real-time emotional interactive model is a critical part of robot expression. First, the attenuating emotional state space is defined and the state transfer probability is acquired. Secondly, the emotional expression model based on the HMM is proposed and the performance transference probability is calculated. Finally, the model is verified on a robot platform with 15 degrees of freedom, and the interactive effects are analysed by a statistical algorithm. The experimental results demonstrate that, compared to traditional algorithms, the emotional expression model produces expressive performances and avoids a mechanized interaction mode.
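The two probability tables the abstract names (state transfer and performance transference) map directly onto the transition and emission matrices of an HMM. The following sketch shows that structure with made-up states, expressions, and probabilities; none of the values come from the paper.

```python
import random

# Illustrative HMM: hidden emotional states emit observable expressions.
STATES = ["happy", "neutral", "sad"]
EXPRESSIONS = ["smile", "blink", "frown"]

# State transfer probabilities: A[i][j] = P(next state j | current state i).
A = {
    "happy":   {"happy": 0.6, "neutral": 0.3, "sad": 0.1},
    "neutral": {"happy": 0.3, "neutral": 0.4, "sad": 0.3},
    "sad":     {"happy": 0.1, "neutral": 0.3, "sad": 0.6},
}
# Performance transference probabilities: B[state][expression].
B = {
    "happy":   {"smile": 0.7, "blink": 0.2, "frown": 0.1},
    "neutral": {"smile": 0.2, "blink": 0.6, "frown": 0.2},
    "sad":     {"smile": 0.1, "blink": 0.2, "frown": 0.7},
}

def sample_performance(start_state, steps, rng=random):
    """Walk the hidden emotional states, emitting one expression per step."""
    state, out = start_state, []
    for _ in range(steps):
        expr = rng.choices(EXPRESSIONS,
                           weights=[B[state][e] for e in EXPRESSIONS], k=1)[0]
        out.append((state, expr))
        state = rng.choices(STATES,
                            weights=[A[state][s] for s in STATES], k=1)[0]
    return out
```

Because successive performances are drawn from distributions rather than fixed lookups, the robot's expression sequence varies from run to run, which is what removes the mechanized feel of a static state-to-expression mapping.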
Purpose The aim of this paper is to propose a grasping method based on intelligent perception for implementing a grasp task under human guidance. Design/methodology/approach First, the authors use Kinect to collect environment information, including both image and voice. The target object is located and segmented by gesture recognition and speech analysis, and finally grasped through path teaching. To obtain the posture of the human gesture accurately, the authors use the Kalman filtering (KF) algorithm to calibrate the posture, the Gaussian mixture model (GMM) for human motion modeling, and Gaussian mixture regression (GMR) to predict human motion posture. Findings Much of the point-cloud information is irrelevant, so the authors combined the human's gesture to remove unrelated objects from the environment as far as possible, which reduces the computation required to segment and recognize objects. To reduce computation further, they used a down-sampling algorithm based on the voxel grid. Originality/value The authors used the down-sampling algorithm, the kd-tree algorithm and the viewpoint feature histogram algorithm to remove the impact of unrelated objects and obtain a better grasping state.
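Voxel-grid down-sampling, as mentioned in the Findings, replaces all points falling in the same voxel with their centroid, shrinking the cloud before segmentation. The sketch below is a pure-Python stand-in for a library filter such as PCL's VoxelGrid; the leaf size and data format are assumptions.

```python
from collections import defaultdict

def voxel_downsample(points, leaf=0.05):
    """points: iterable of (x, y, z) tuples in metres.

    Buckets points by voxel index and returns one centroid per occupied
    voxel, so the output size is bounded by the number of voxels rather
    than the number of raw points.
    """
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // leaf), int(y // leaf), int(z // leaf))
        voxels[key].append((x, y, z))
    # Centroid of each voxel's points, coordinate by coordinate.
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in voxels.values()
    ]
```

In a pipeline like the one described, this filter would run before the kd-tree neighbour searches and viewpoint-feature-histogram descriptors, since both scale with the number of points.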