Abstract: As the field of social robotics grows, a consensus has emerged on designing and implementing robotic systems that can adapt to the user's actions. These adaptations may be driven by the user's emotions, personality, or memory of past interactions. We therefore believe it is important to review past research on adaptive robots deployed in various social environments. In this paper, we present a systematic review of the adaptive interactions reported across a number of Human-Robot Interaction domains and outline future directions that can guide the design of adaptive social robots. We conjecture that this will help achieve long-term applicability of robots in various social domains.
In this article, we present an emotion and memory model for a social robot. The model allowed the robot to build a memory account of a child's emotional events over four individual sessions, and the robot then adapted its behaviour based on that memory. The model was applied to the NAO robot to teach vocabulary to children while playing the popular game ‘Snakes and Ladders’. We conducted an exploratory evaluation of our model with 24 children at a primary school over 2 weeks to assess its impact on children's long-term social engagement and overall vocabulary learning. Our preliminary results showed that the behaviour generated by our model was able to sustain social engagement; it also helped children improve their vocabulary. We further evaluated the impact of the NAO robot's positive, negative and neutral emotional feedback on children's vocabulary learning. Three groups of children (eight per group) interacted with the robot on four separate occasions over a period of 2 weeks. Our results showed that the condition in which the robot displayed positive emotional feedback had a significantly positive effect on children's vocabulary learning performance compared with the other two conditions, negative feedback and neutral feedback.
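As a minimal sketch of how a per-child emotional memory might be represented and used to adapt feedback across sessions, consider the following illustration. The data structure, valence scores, and adaptation rule are assumptions made for demonstration only and are not the model reported in the article.

```python
# Illustrative sketch (assumed structure, not the authors' model): a per-child
# memory of emotional events, accumulated over sessions, used to pick feedback.
from collections import defaultdict
from statistics import mean


class EmotionMemory:
    """Stores valence scores (-1..1) of a child's emotional events per session."""

    def __init__(self):
        self.sessions = defaultdict(list)  # session id -> list of valence scores

    def record(self, session_id, valence):
        self.sessions[session_id].append(valence)

    def overall_valence(self):
        scores = [v for events in self.sessions.values() for v in events]
        return mean(scores) if scores else 0.0

    def select_feedback(self):
        # Hypothetical adaptation rule: more encouraging feedback after
        # predominantly negative sessions, celebratory after positive ones.
        v = self.overall_valence()
        if v < -0.2:
            return "encouraging"
        if v > 0.2:
            return "celebratory"
        return "neutral"


memory = EmotionMemory()
memory.record(1, -0.5)
memory.record(2, 0.1)
print(memory.select_feedback())  # -> "encouraging" for this toy history
```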
Cognitive load has been widely studied to help understand human performance. In applications such as automation, robotics, and aerospace, monitoring user cognitive load is desirable to achieve operational safety and to improve the user experience: it enables efficient workload management and can help avoid or reduce human error. However, tracking cognitive load in real time with high accuracy remains a challenge. Hence, we propose a framework to detect cognitive load by non-intrusively measuring physiological data from the eyes and heart. We exemplify and evaluate the framework in a study where participants engage in a task that induces different levels of cognitive load. The framework uses a set of classifiers to predict low, medium and high levels of cognitive load, and these classifiers achieved high predictive accuracy; in particular, Random Forest and Naive Bayes performed best, with accuracies of 91.66% and 85.83%, respectively. Furthermore, we found that, while the mean pupil diameter change for both the right and left eyes was the most prominent feature, blinking rate also made a moderately important contribution to this highly accurate prediction of low, medium and high cognitive load. These accuracy results considerably outperform prior approaches and demonstrate the applicability of our framework for detecting cognitive load.
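As a rough illustration of how such a classification stage might look, the sketch below trains a Random Forest on eye-based features (mean pupil diameter change for each eye and blinking rate) to predict three load levels. The synthetic data, feature layout, and hyperparameters are assumptions for demonstration only and are not taken from the study.

```python
# Illustrative sketch (not the authors' implementation): Random Forest
# classification of low / medium / high cognitive load from eye-based features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per time window, columns are
# [mean pupil diameter change (right eye),
#  mean pupil diameter change (left eye),
#  blinking rate]
X = rng.normal(size=(120, 3))
# Hypothetical labels: 0 = low, 1 = medium, 2 = high cognitive load
y = rng.integers(0, 3, size=120)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("Cross-validated accuracy:", scores.mean())

# Feature importances indicate which physiological signals drive the prediction,
# analogous to the pupil-diameter and blinking-rate findings reported above.
clf.fit(X, y)
print("Feature importances:", clf.feature_importances_)
```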