This paper addresses the problem of Human-Aware Navigation (HAN), using multiple camera sensors to implement a vision-based person tracking system. The main contributions of this paper are a novel and efficient Deep Learning person detection method and a standardization of human-aware constraints. In the first stage of the approach, we propose to cascade the Aggregate Channel Features (ACF) detector with a deep Convolutional Neural Network (CNN) to achieve fast and accurate Pedestrian Detection (PD). Regarding human awareness, which can be defined as a set of constraints on the robot's motion, we use a mixture of asymmetric Gaussian functions to define the cost functions associated with each constraint. Both methods proposed herein are evaluated individually to measure the impact of each component. The final solution (including both the proposed pedestrian detection and the human-aware constraints) is tested in a typical domestic indoor scenario, in four distinct experiments. The results show that the robot is able to cope with human-aware constraints defined according to common proxemics and social rules.
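To make the human-aware cost concrete, the sketch below evaluates a 2D asymmetric Gaussian centred on a detected person, with a larger variance in front of the person than behind, and combines several people by keeping the most restrictive value at each position. The variance values and function names are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def asymmetric_gaussian_cost(x, y, px, py, theta,
                             sigma_front=2.0, sigma_rear=1.0, sigma_side=1.25):
    """Cost of placing the robot at (x, y) relative to a person at (px, py)
    facing direction theta. Sigma values are illustrative only."""
    # Express the query point in the person's frame (forward / lateral axes).
    dx, dy = x - px, y - py
    fx = np.cos(theta) * dx + np.sin(theta) * dy    # forward component
    fy = -np.sin(theta) * dx + np.cos(theta) * dy   # lateral component
    # Asymmetry: the cost decays more slowly in front of the person.
    sigma_x = sigma_front if fx >= 0 else sigma_rear
    return np.exp(-(fx**2 / (2 * sigma_x**2) + fy**2 / (2 * sigma_side**2)))

def mixture_cost(x, y, people):
    """Combine per-person costs by taking the maximum, so the most
    restrictive human-aware constraint dominates at each position."""
    return max(asymmetric_gaussian_cost(x, y, px, py, th)
               for (px, py, th) in people)
```

In a planner, such a cost map would typically be added on top of the usual obstacle costs, so that paths passing close to (or directly in front of) a person are penalized.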
A real-time Deep Learning based method for Pedestrian Detection (PD) is applied to the Human-Aware robot navigation problem. The pedestrian detector combines the Aggregate Channel Features (ACF) detector with a deep Convolutional Neural Network (CNN) to obtain fast and accurate performance. Our solution is first evaluated on a set of real images taken from onboard and offboard cameras, and then validated in a typical robot navigation environment with pedestrians (two distinct experiments are conducted). The results of both tests show that our pedestrian detector is robust and fast enough to be used in robot navigation applications.
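The two-stage detector described above can be sketched as a cascade in which a fast ACF stage proposes candidate boxes and a CNN re-scores each crop to reject false positives. The `acf_detect` callable is a hypothetical stand-in for an ACF implementation, and `VerifierCNN` is a toy network, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VerifierCNN(nn.Module):
    """Toy binary classifier that re-scores pedestrian proposals (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 16 * 16, 1)

    def forward(self, x):                             # x: (N, 3, 64, 64) crops
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)    # -> (N, 16, 32, 32)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)    # -> (N, 32, 16, 16)
        return torch.sigmoid(self.fc(x.flatten(1)))   # pedestrian probability

def cascade_detect(image, acf_detect, cnn, threshold=0.5):
    """Stage 1: cheap, high-recall ACF proposals; stage 2: CNN verification.
    `acf_detect` is assumed to return a list of (x, y, w, h) boxes for a
    (3, H, W) image tensor."""
    detections = []
    for (x, y, w, h) in acf_detect(image):
        crop = image[:, y:y + h, x:x + w]
        crop = F.interpolate(crop[None].float(), size=(64, 64))
        score = cnn(crop).item()
        if score > threshold:                         # keep CNN-confirmed boxes only
            detections.append(((x, y, w, h), score))
    return detections
```

The design intent of such a cascade is that the expensive CNN only runs on the small number of windows that survive the ACF stage, which is what keeps the pipeline real-time.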
Reliable estimation of 3D parameters is essential for several applications, such as planning and control, including Image-Based Visual Servoing. This control scheme depends directly on 3D parameters, e.g. the depth of points and/or the depth and direction of 3D straight lines. Recently, a framework for Active Structure-from-Motion was proposed, addressing the former feature type; however, straight lines were not addressed. These are 1D objects, which allow for more robust detection and tracking. In this work, the problem of Active Structure-from-Motion for 3D straight lines is addressed. An explicit representation of these features is presented, and a change of variables is proposed, which allows the dynamics of the line to respect the observability conditions of the framework. A control law is used to keep the control effort reasonable while achieving a desired convergence rate. The approach is validated first in simulation for a single line, and then on a real robot setup, where experiments are conducted first for a single line and then for three lines.
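As a concrete example of an explicit 3D line representation, the sketch below uses Plücker-style coordinates: a unit direction u and a moment vector m = p × u for any point p on the line, expressed in the camera frame. In a calibrated camera the image of the line is the normalized moment, its norm encodes the line's distance to the camera, and the twist dynamics follow the usual visual-servoing sign convention. This is a generic parameterization given for intuition only; it does not reproduce the paper's change of variables or observer.

```python
import numpy as np

def plucker_line(p, u):
    """Explicit 3D line representation: unit direction u and moment m = p x u,
    where p is any point on the line (coordinates in the camera frame)."""
    u = u / np.linalg.norm(u)
    m = np.cross(p, u)
    return u, m

def project_line(u, m):
    """Image of the line for a calibrated (normalized) camera: the interpretation
    plane through the optical centre has normal m, so the image-line coefficients
    are m up to scale, and ||m|| is the distance from the centre to the 3D line."""
    depth = np.linalg.norm(m)          # distance of the line to the camera
    l = m / depth                      # homogeneous image-line coefficients
    return l, depth

def line_twist_dynamics(u, m, v, w):
    """Plücker line dynamics under a camera twist (v: linear, w: angular velocity,
    both in the camera frame), following the convention p_dot = -v - w x p.
    Shown only to illustrate how the measured line couples with the unknown depth."""
    du = -np.cross(w, u)
    dm = -np.cross(v, u) - np.cross(w, m)
    return du, dm
```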