Decision making is not a unitary entity but rather involves a series of interdependent processes. Decisions entail a choice between two or more alternatives. Within the complex series of decisional processes, at least two levels can be differentiated: a first level of information integration (process level) and a second level of information interpretation (control level), leading to a subsequent motor response or cognitive process. The aim of this study was to investigate the neural network underlying these decisional processes. In a single-trial fMRI study, we implemented a simple decision-making task in which subjects had to decide between two alternatives described by five attributes. The similarity between the two alternatives was varied systematically in order to achieve a parametric variation of decisional effort. For easy trials, the two alternatives differed substantially in several attributes, whereas for difficult trials, they differed only in small details. The results show a distributed neural network related to decisional effort. By means of time course analysis, different subprocesses within this network could be differentiated: regions subserving the integration of the presented information (premotor areas and superior parietal lobe), regions subserving the interpretation of this information (frontolateral and frontomedial cortex, anterior insula, and caudate), and a region in the inferior frontal junction updating task rules.
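To illustrate how such a parametric variation of decisional effort can enter a single-trial fMRI analysis, the sketch below builds a difficulty-modulated regressor by convolving trial onsets, weighted by a per-trial similarity score, with a canonical haemodynamic response function. All onset times, similarity values, and HRF parameters are illustrative assumptions, not the study's actual design.

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical single-trial design (all numbers are illustrative assumptions):
# trial onsets in seconds and a per-trial similarity of the two alternatives
# (higher similarity = harder decision = more decisional effort).
tr = 2.0                # repetition time (s)
n_scans = 180
onsets = np.array([12.0, 44.0, 78.0, 112.0, 148.0, 184.0, 220.0, 256.0])
similarity = np.array([0.20, 0.90, 0.50, 0.80, 0.10, 0.70, 0.30, 0.95])

def canonical_hrf(tr, duration=32.0):
    """Double-gamma haemodynamic response function sampled at the TR."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6.0) - 0.35 * gamma.pdf(t, 16.0)
    return h / h.sum()

# Stick functions at scan resolution: one unmodulated onset regressor and one
# weighted by the mean-centred similarity (the parametric "effort" modulator).
scan_idx = np.round(onsets / tr).astype(int)
main = np.zeros(n_scans)
effort = np.zeros(n_scans)
main[scan_idx] = 1.0
effort[scan_idx] = similarity - similarity.mean()

hrf = canonical_hrf(tr)
X = np.column_stack([
    np.convolve(main, hrf)[:n_scans],    # trial-onset regressor
    np.convolve(effort, hrf)[:n_scans],  # decisional-effort modulation
    np.ones(n_scans),                    # constant term
])

# Voxel-wise GLM: the beta for column 1 indexes sensitivity to decisional effort.
# beta = np.linalg.lstsq(X, voxel_timeseries, rcond=None)[0]
```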
In many future joint-action scenarios, humans and robots will have to interact physically in order to cooperate successfully. Ideally, human-robot interaction should not require training on the human side, but should be intuitive and simple. Previously, we reported on a simple case of physical human-robot interaction, a hand-over task [1]. Even such a basic task as manually handing over an object from one agent to another requires that both partners agree upon certain basic prerequisites and boundary conditions. While some of these are negotiated explicitly, e.g. by verbal communication, others are determined indirectly and adaptively in the course of the cooperation. In the previous study we compared a human-human hand-over interaction with the same task performed by a human and a robot. However, the trajectories used for the robot, a conventional trapezoidal velocity profile in joint coordinates and a minimum-jerk profile of the end-effector, bear little resemblance to natural human movements. In this study we introduce a novel trajectory generator, a variation of the traditional minimum-jerk profile that we call the 'decoupled minimum-jerk' profile. Its trajectories are much closer to those observed in human-human experiments. We evaluated its performance with respect to human comfort and acceptance in a simple hand-over experiment using a post-test questionnaire. The evaluation of the questionnaire revealed no difference in comfort, human-likeness, or subjective safety between the new planner and the minimum-jerk profile. Thus, the 'decoupled minimum-jerk' planner, which offers important advantages with respect to target approach, proved to be a promising alternative to the previously used minimum-jerk profile.
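The baseline minimum-jerk profile mentioned above has a well-known closed form, sketched below per Cartesian coordinate. The abstract does not specify the 'decoupled' variant, so the decoupling shown here (an independent movement duration per axis, which changes how the target is approached) is only one possible reading and an assumption for illustration, not the authors' planner.

```python
import numpy as np

def minimum_jerk(x0, xf, duration, t):
    """Classic minimum-jerk position profile from x0 to xf over `duration`.

    x(t) = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5), tau = t / duration,
    with zero velocity and acceleration at both endpoints.
    """
    tau = np.clip(t / duration, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Hand-over end-effector trajectory, evaluated per Cartesian axis.
t = np.linspace(0.0, 1.2, 121)                 # 1.2 s movement time (assumed)
start = np.array([0.30, -0.20, 0.90])          # assumed start position (m)
goal = np.array([0.55, 0.10, 1.10])            # assumed hand-over point (m)

# Coupled (standard) minimum-jerk: all axes share one duration.
coupled = np.stack(
    [minimum_jerk(s, g, 1.2, t) for s, g in zip(start, goal)], axis=1
)

# One *possible* decoupling (assumption): each axis gets its own duration, so
# e.g. the vertical axis settles before the horizontal approach completes.
durations = np.array([1.2, 1.2, 0.8])
decoupled = np.stack(
    [minimum_jerk(s, g, d, t) for s, g, d in zip(start, goal, durations)], axis=1
)
```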
Cognitive-technical intelligence is envisioned to be constantly available and capable of adapting to the user's emotions. This raises the question of which specific emotions intelligent systems should reliably recognise. In this study, we therefore attempted to identify similarities and differences between the emotions occurring in human-human interaction (HHI) and human-machine interaction (HMI). We focused on which emotions participants retrospectively attribute to experienced HMI scenarios compared with HHI scenarios. The sample consisted of N = 145 participants, divided into two groups: the first group provided positive and negative scenario descriptions of HMI, the second of HHI. Subsequently, the participants evaluated their respective scenarios using 94 emotion-related adjectives. The correlations between the occurrences of emotions in HMI and HHI were very high. The results therefore do not support the assumption that only a few emotions are relevant in HMI.
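As a minimal sketch of the reported comparison, assuming each adjective's occurrence is aggregated into one frequency per group, the HMI and HHI emotion profiles can be correlated as below. The data here are placeholders, not the study's ratings.

```python
import numpy as np

# Hypothetical aggregation: for each of the 94 emotion adjectives, the share of
# participants in each group who attributed it to their scenarios.
rng = np.random.default_rng(0)
hmi_profile = rng.uniform(0.0, 1.0, size=94)                       # placeholder HMI frequencies
hhi_profile = np.clip(hmi_profile + rng.normal(0.0, 0.1, 94), 0, 1)  # placeholder HHI frequencies

# Pearson correlation between the two emotion-occurrence profiles.
r = np.corrcoef(hmi_profile, hhi_profile)[0, 1]
print(f"HMI vs. HHI emotion-profile correlation: r = {r:.2f}")
```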