The steady aging of the population is creating new challenges in developed countries. Robots offer an opportunity to extend the period of independent living of older adults and to alleviate the associated economic and social burden. We present a new social robot, Mini, specifically designed to assist and accompany older adults in their daily life, either at home or in a nursing facility. Based on the results of several meetings with experts in this field, we have built a robot able to provide services in the areas of safety, entertainment, personal assistance, and stimulation. Mini supports older adults and caregivers in cognitive and mental tasks. We present the robot platform and describe the software architecture, focusing in particular on human–robot interaction. We detail how the robot operates and how its modules interrelate in a real use case. In the last part of the paper, we evaluate how users perceive the robot; participants reported promising results in terms of usability, appearance, and satisfaction. This paper describes all aspects of the design and development of a new social robot, which may be useful to other researchers facing the multiple challenges of creating a robotic platform for older people.

Keywords: Robots for elderly • Healthcare robotics • Human–robot interaction • HRI • Social robotics • Assistive robotics

The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economía y Competitividad; and Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), funded by the Ministerio de Ciencia, Innovación y Universidades.
An important aspect of human–robot interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually by placing a large number of sensors throughout the robot’s shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot’s shell is processed using machine learning techniques to distinguish between different types of touches. The system determines when the robot is touched (touch detection) and ascertains which kind of touch was performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective: since a single microphone is enough to cover each solid part of the robot, just a few microphones cover the whole shell. It is also easy to install and configure, requiring only a contact surface to attach each microphone to the robot’s shell and a connection to the robot’s computer. Results show high accuracy in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures.
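The pipeline described above can be sketched as follows. The feature choices (frame-level RMS energy and zero-crossing rate) and the decision-tree classifier are illustrative assumptions: the paper uses Logistic Model Trees on its own feature set, and the synthetic clips below merely stand in for real contact-microphone recordings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def touch_features(audio, frame=512):
    """Summarise a contact-microphone clip with simple frame statistics:
    RMS energy (intensity) and zero-crossing rate (signal roughness)."""
    frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return np.array([rms.mean(), rms.max(), rms.std(), zcr.mean()])

# Synthetic stand-ins for labelled touch recordings (slap vs. stroke);
# a slap is modelled as a loud, fast-decaying burst, a stroke as a
# quieter, slowly decaying one. Real data would come from the shell mics.
rng = np.random.default_rng(0)
def fake_clip(sharp):
    t = np.linspace(0, 1, 4096)
    env = np.exp(-t * (40 if sharp else 4))  # slaps decay quickly
    return env * rng.normal(size=t.size) * (2.0 if sharp else 0.5)

X = np.array([touch_features(fake_clip(i % 2 == 0)) for i in range(60)])
y = np.array(["slap" if i % 2 == 0 else "stroke" for i in range(60)])

# Train a shallow tree on the feature vectors and classify a new clip.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([touch_features(fake_clip(True))])[0])
```

In practice, touch detection would first gate the stream by an energy threshold, and only the segments flagged as contact events would be passed to the classifier.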
A robust perception system is crucial for natural human–robot interaction. An essential capability of such systems is to provide a rich representation of the robot’s environment, typically using multiple sensory sources; this information allows the robot to react to both external stimuli and user responses. The novel contribution of this paper is a perception architecture based on the bio-inspired concept of endogenous attention, integrated into a real social robot. The architecture is defined at a theoretical level, to provide insights into the underlying bio-inspired mechanisms, and at a practical level, to integrate and test it within the complete architecture of a robot. We also define mechanisms to establish the most salient stimulus for the detection or task in question. Furthermore, the attention-based architecture uses information from the robot’s decision-making system to shape user responses and robot decisions. Finally, this paper presents preliminary test results from the integration of this architecture into a real social robot.
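One way such endogenous attention could select "the most salient stimulus for the task in question" is a winner-take-all rule in which bottom-up salience scores are modulated by top-down weights supplied by the decision-making system. The sketch below is an assumption about how this might work, not the paper's actual implementation; all stimulus names and weight values are illustrative.

```python
def select_stimulus(saliences, task_weights):
    """Pick the stimulus with the highest task-weighted salience.

    saliences:    bottom-up scores from the perception modules
    task_weights: top-down biases from the decision-making system
                  (stimuli not listed keep a neutral weight of 1.0)
    """
    scored = {s: v * task_weights.get(s, 1.0) for s, v in saliences.items()}
    return max(scored, key=scored.get)

# Bottom-up saliences as the perception modules might report them:
saliences = {"voice": 0.6, "face": 0.4, "touch": 0.3}
# A hypothetical "conversation" task raises the weight of auditory input:
task_weights = {"voice": 1.5, "face": 1.0, "touch": 0.5}

print(select_stimulus(saliences, task_weights))  # → voice
```

The same saliences under a different task (e.g., one that up-weights touch) would yield a different winner, which is the sense in which top-down information from the decision-making system shapes what the robot attends and responds to.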