One of the fundamental issues for service robots is human-robot interaction. In order to perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' positions using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
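To make the fusion step concrete, the following is a minimal sketch of a sequential unscented Kalman filter update, assuming a constant-velocity state [x, y, vx, vy] and treating both the leg detector and the ground-projected face detector as 2D position measurements with their own noise levels; the sensor models, noise values, and the use of the filterpy library are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch (not the paper's implementation): sequential UKF fusion of
# leg and face detections using filterpy. The state is [x, y, vx, vy]; both
# sensors are modelled as 2D position measurements and applied one after the
# other within the same filter cycle. All noise values are assumptions.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    # Constant-velocity motion model.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    # Both sensors observe the planar position of the tracked person.
    return x[:2]

sigmas = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=0.1, fx=fx, hx=hx, points=sigmas)
ukf.x = np.zeros(4)
ukf.P = np.eye(4) * 0.5
ukf.Q *= 0.05
R_LEGS = np.diag([0.05, 0.05])   # assumed LRF leg-detector noise
R_FACE = np.diag([0.20, 0.20])   # assumed ground-projected face-detector noise

def fuse(z_legs=None, z_face=None):
    """One tracking cycle: predict, then update sequentially per sensor."""
    ukf.predict()
    if z_legs is not None:
        ukf.update(np.asarray(z_legs, dtype=float), R=R_LEGS)
    if z_face is not None:
        ukf.update(np.asarray(z_face, dtype=float), R=R_FACE)
    return ukf.x[:2]

print(fuse(z_legs=[1.0, 0.5], z_face=[1.1, 0.45]))
```

Processing the measurements one at a time also handles cycles in which only one sensor fires, for example when the face leaves the camera's field of view while the legs remain visible to the LRF.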
Human detection and tracking are essential aspects to be considered in service robotics, as the robot often shares its workspace and interacts closely with humans. This paper presents an online learning framework for human classification in 3D LiDAR scans, taking advantage of robust multi-target tracking to avoid the need for data annotation by a human expert. The system learns iteratively by retraining a classifier online with the samples collected by the robot over time. A novel aspect of our approach is that errors in the training data can be corrected using the information provided by the 3D LiDAR-based tracking. To this end, an efficient 3D cluster detector of potential human targets has been implemented. We evaluate the framework using a new 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments analyse the real-time performance of the cluster detector and show that our online-learned human classifier matches, and in some cases outperforms, its offline version.
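As an illustration of the retraining loop only (under assumed interfaces, not the paper's code), the sketch below updates an incremental linear classifier with batches of cluster features whose labels are assumed to have been corrected by the tracker.

```python
# Minimal sketch of online retraining with tracker-corrected labels. The
# feature vectors and labels here are random stand-ins; in the described
# system they would come from the 3D cluster detector and the multi-target
# tracker, respectively.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()               # linear classifier trained incrementally
CLASSES = np.array([0, 1])          # 0 = non-human cluster, 1 = human cluster

def retrain_online(clf, batches):
    """Each batch is (X, y): cluster features and tracker-corrected labels."""
    for X, y in batches:
        clf.partial_fit(X, y, classes=CLASSES)
    return clf

# Toy usage with random features standing in for real cluster descriptors.
rng = np.random.default_rng(0)
batches = [(rng.normal(size=(16, 10)), rng.integers(0, 2, 16)) for _ in range(5)]
retrain_online(clf, batches)
print(clf.predict(rng.normal(size=(3, 10))))
```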
Stress remains a significant social problem for individuals in modern societies. This paper presents a machine learning approach for the automatic detection of stress in people in a social situation by combining two sensor systems that capture physiological and social responses. We compare the performance of different classifiers, including support vector machine, AdaBoost, and k-nearest neighbour. Our experimental results show that by combining the measurements from both sensor systems, we could accurately discriminate between stressful and neutral situations during a controlled Trier social stress test (TSST). Moreover, this paper assesses the discriminative ability of each sensor modality individually and considers their suitability for real-time stress detection. Finally, we present a study of the most discriminative features for stress detection.
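A minimal sketch of this kind of classifier comparison, using scikit-learn and synthetic stand-ins for the physiological and social features; the data, feature dimensions, and hyperparameters are assumptions for illustration only.

```python
# Sketch of comparing SVM, AdaBoost, and k-NN on combined sensor features.
# X concatenates per-window physiological and social features (stand-ins
# here); y marks stressful vs. neutral windows.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_phys = rng.normal(size=(120, 8))    # stand-in physiological features
X_soc = rng.normal(size=(120, 4))     # stand-in social-response features
X = np.hstack([X_phys, X_soc])        # combined modalities
y = rng.integers(0, 2, 120)           # 0 = neutral, 1 = stressful

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "AdaBoost": AdaBoostClassifier(),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```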
Recent technological advances have enabled modern robots to become part of our daily life. In particular, assistive robotics has emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and the UK) and proved to be an effective assistant for the elderly at home.
This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of "experts" to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state of the art.
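The sketch below illustrates the general idea of detecting candidate human clusters in a 3D scan; it uses DBSCAN-based Euclidean clustering and a person-sized bounding-box filter as stand-ins, not the paper's specific real-time clustering algorithm or feature set.

```python
# Sketch of candidate-cluster detection in a 3D point cloud: Euclidean
# clustering via DBSCAN followed by a size filter that keeps clusters whose
# bounding box is roughly person-sized. Thresholds are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_human_clusters(points, eps=0.35, min_points=10):
    """points: (N, 3) array of LiDAR returns in metres."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    clusters = []
    for lbl in set(labels) - {-1}:            # -1 marks DBSCAN noise points
        cluster = points[labels == lbl]
        extent = cluster.max(axis=0) - cluster.min(axis=0)
        # Assumed person-sized bounds: footprint under ~1.2 m, height 0.8-2.0 m.
        if extent[0] < 1.2 and extent[1] < 1.2 and 0.8 < extent[2] < 2.0:
            clusters.append(cluster)
    return clusters

# Toy usage: a dense blob standing in for a person plus scattered background.
rng = np.random.default_rng(2)
person = rng.normal([2.0, 1.0, 0.9], [0.15, 0.15, 0.30], size=(200, 3))
background = rng.uniform([-5, -5, 0], [5, 5, 3], size=(300, 3))
print(len(candidate_human_clusters(np.vstack([person, background]))))
```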