F-formations are a set of possible patterns in which groups of people tend to spatially organize themselves while engaging in social interactions. In this paper, we study the behavior of teleoperators of mobile robotic telepresence systems to determine whether they adhere to spatial formations when navigating to groups. This work uses a simulated environment in which teleoperators are asked to navigate to different groups of virtual agents. The simulated environment represents a conference lobby scenario where multiple groups of virtual agents of varying sizes are placed in different spatial formations. The task requires teleoperators to navigate a robot to join each group using an egocentric-perspective camera. In a second phase, teleoperators evaluate their own performance by reviewing how they navigated the robot from an exocentric perspective. The study has two important outcomes: first, teleoperators inherently respect F-formations even when operating a mobile robotic telepresence system; second, teleoperators prefer additional support in order to correctly navigate the robot into a preferred position that adheres to F-formations.
Estimating distances between people and robots plays a crucial role in understanding social Human–Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human–robot teams. Different sensors can be employed for distance estimation between a person and a robot, and the number of challenges a distance estimation method must address rises as the sensor technology becomes simpler. When estimating distances using individual images from a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible for specific facial or body features to be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method builds on established 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. It estimates the distance to the camera from the Euclidean distance between the ear and torso keypoints of each person in the image plane. The ear and torso characteristic points were selected for their relatively high visibility regardless of a person's orientation and for a certain degree of uniformity across age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
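To make the ear–torso cue concrete, here is a minimal Python sketch under a pinhole-camera assumption: distance is inversely proportional to the pixel distance between an ear and a torso keypoint. The keypoint names, the choice of the neck as the torso point, and the calibration constant K are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the ear-torso distance cue. Assumptions: keypoints come
# from an OpenPose-style 2D pose estimator; K is a hypothetical calibration
# constant, roughly focal_length_px * real_ear_torso_length_m, fitted offline
# from images of people at known distances.

def ear_torso_pixel_length(keypoints):
    """Euclidean distance in the image plane between an ear keypoint and a
    torso keypoint (here: the neck). `keypoints` maps part names to (x, y)
    pixel coordinates; occluded parts are absent."""
    ear = keypoints.get("right_ear") or keypoints.get("left_ear")
    torso = keypoints.get("neck")
    if ear is None or torso is None:
        return None
    return float(np.linalg.norm(np.asarray(ear) - np.asarray(torso)))

def estimate_distance(keypoints, K=150.0):
    """Distance to the camera in meters, assuming the real-world ear-torso
    length is roughly constant across people, so that
    distance ~ K / pixel_length under a pinhole camera model."""
    length = ear_torso_pixel_length(keypoints)
    if not length:
        return None  # keypoints missing: no estimate for this person
    return K / length

# Example: an ear-neck span of ~59 px maps to roughly 150 / 59 = 2.5 m
# with this (made-up) calibration.
person = {"right_ear": (412, 180), "neck": (405, 239)}
print(estimate_distance(person))
```

The appeal of this cue, as the abstract notes, is that ears and torso remain detectable across body orientations, so the estimate degrades more gracefully than face-based methods when people are not facing the camera.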
In this paper, we present a model that proposes an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to F-formation theory and automatically calculates a suitable placement for the robot within a group of people. The method was evaluated in an experiment where participants stood in different formations and a robot was teleoperated to join the group. In one set of experiments, the operator positioned the robot at the location specified by our algorithm; in another, operators were free to position the robot according to their personal choice. Follow-up questionnaires determined which placements the participants preferred. Our results indicate that the participants support the proposed method for automatic placement of the robot. The contribution of this work is a novel method to automatically estimate the best placement of the robot, together with user-experiment results verifying the quality of this method. These results suggest that teleoperated robots, e.g., mobile robotic telepresence systems, could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.
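One simple way to picture the O-space estimate and the placement step: project a fixed stride in front of each participant, average the projected points to obtain an O-space center, and place the robot on the surrounding P-space circle in the widest angular gap between participants. The sketch below implements this simplified reading; the stride length and the gap rule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Minimal sketch of F-formation-based robot placement. Assumptions:
# participants' 2D positions (meters) and body orientations (radians) are
# known; stride=0.8 m and the largest-gap rule are simplifications.

def o_space_center(positions, orientations, stride=0.8):
    """Estimate the O-space center as the mean of points projected `stride`
    meters in front of each participant (people orient toward a shared
    O-space in an F-formation)."""
    positions = np.asarray(positions, dtype=float)
    headings = np.stack([np.cos(orientations), np.sin(orientations)], axis=1)
    return (positions + stride * headings).mean(axis=0)

def robot_placement(positions, orientations, stride=0.8):
    """Place the robot on the P-space circle, in the middle of the largest
    angular gap between participants, facing the O-space center."""
    center = o_space_center(positions, orientations, stride)
    rel = np.asarray(positions, dtype=float) - center
    radius = np.linalg.norm(rel, axis=1).mean()
    angles = np.sort(np.arctan2(rel[:, 1], rel[:, 0]))
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    i = int(np.argmax(gaps))
    a = angles[i] + gaps[i] / 2.0  # middle of the widest gap
    pos = center + radius * np.array([np.cos(a), np.sin(a)])
    facing = np.arctan2(*(center - pos)[::-1])  # face the O-space center
    return pos, facing

# Two people facing each other (vis-a-vis formation): the robot is placed
# to the side, completing a triangle around the shared O-space.
pos, heading = robot_placement([(0, 0), (1.6, 0)], [0.0, np.pi])
print(pos, heading)
```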
The ability of a robot to detect and join groups of people is of increasing importance in social contexts and for collaboration between teams of humans and robots. In this paper, we propose a framework, autonomous group interactions for robots (AGIR), that endows a robot with the ability to detect such groups while following the principles of F-formations. Because it relies on on-board sensors, the method applies to a wide spectrum of robot systems, ranging from autonomous service robots to telepresence robots. The presented framework detects individuals, estimates their position and orientation, detects groups, determines their F-formations, and suggests a position for the robot to enter the social group. For evaluation, two simulation scenes were developed based on standard real-world datasets. The first scene contains 20 virtual agents (VAs) interacting in 7 groups of varying sizes and 3 different formations. The second scene contains 36 VAs positioned in 13 groups of varying sizes and 6 different formations. A model of a Pepper robot is placed in both simulated scenes at randomly generated positions. The robot's ability to estimate orientation, detect groups, and estimate F-formations at various locations is used to validate the approach. The obtained results show high accuracy within each of the simulated scenarios and demonstrate that the framework can operate from an egocentric view on a robot in real time.
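The group-detection and F-formation-classification stages named above can be sketched on already-detected people as follows. This assumes a perception front end that supplies 2D positions and body orientations; the clustering threshold and the formation heuristics are placeholders, not AGIR's actual components.

```python
import numpy as np

# Sketch of two pipeline stages on already-detected people. Assumptions:
# positions are (x, y) in meters, orientations in radians; the 2.0 m
# threshold and the pairwise-heading rules are illustrative heuristics.

def detect_groups(positions, max_dist=2.0):
    """Greedy single-linkage clustering: two people belong to the same
    group if they stand within `max_dist` meters of each other."""
    positions = np.asarray(positions, dtype=float)
    labels = list(range(len(positions)))
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) <= max_dist:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    groups = {}
    for idx, l in enumerate(labels):
        groups.setdefault(l, []).append(idx)
    return list(groups.values())

def classify_formation(positions, orientations):
    """Crude F-formation labels from the relative heading of a pair, or
    'circular' for larger groups (heuristic placeholder)."""
    if len(positions) == 1:
        return "singleton"
    if len(positions) > 2:
        return "circular"
    diff = abs((orientations[0] - orientations[1] + np.pi) % (2 * np.pi) - np.pi)
    if diff > 3 * np.pi / 4:
        return "vis-a-vis"
    if diff > np.pi / 4:
        return "L-shape"
    return "side-by-side"

# Example: four people, two conversing face-to-face and two side by side.
pos = [(0, 0), (1.2, 0), (5, 5), (5.8, 5)]
ori = [0.0, np.pi, np.pi / 2, np.pi / 2]
for members in detect_groups(pos):
    print(members, classify_formation([pos[i] for i in members],
                                      [ori[i] for i in members]))
```

A suggested entry position for the robot could then be computed per group, for instance with a largest-gap rule like the placement sketch shown earlier.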