Purpose
This paper aims to present an object detection methodology for categorizing 3D object models efficiently. The authors propose a dynamically generated hierarchical architecture that computes objects' 3D poses very fast so that mobile service robots can grasp them.
Design/methodology/approach
The methodology used in this study is based on a dynamic pyramid search, a fast template representation, metadata and context-free grammars. In the experiments, the authors use an omnidirectional KUKA mobile manipulator equipped with an RGB-D camera to localize objects requested by humans. The proposed architecture is based on efficient object detection and visual servoing, and in the experiments the robot successfully recovers 3D poses. The proposal is not restricted to specific robots or objects and can grow as much as needed.
Findings
The authors present dynamic categorization using context-free grammars together with 3D object detection and, through several experiments, provide a proof of concept. The results are promising, showing that the methods can scale to more complex scenes and can be used in future real-world applications where mobile robots are needed, such as service robotics or industry in general.
Research limitations/implications
The experiments were carried out using a mobile KUKA youBot. In this first stage, the authors carried out an experimental validation; greater scalability and more robust algorithms will improve the present proposal.
Practical implications
The current proposal describes a scalable architecture in which more agents can be added or reprogrammed to handle more complicated tasks.
Originality/value
The main contribution of this study resides in the dynamic categorization scheme for fast detection of 3D objects, and in the issues and experiments addressed to test the viability of the methods. State-of-the-art approaches usually treat categories as rigid and make static queries to datasets; in the present approach there are no fixed categories, and categories are created and combined on the fly to speed up detection.
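To make the on-the-fly categorization idea concrete, the following is a minimal sketch of how a small context-free grammar could compose detection categories dynamically instead of querying a fixed category list. All names here (the `RULES` grammar, the `expand` function, and the specific object labels) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical grammar: nonterminals (categories) expand into
# subcategories or into terminal labels with concrete detectors.
RULES = {
    "GraspableObject": [["Container"], ["Tool"]],
    "Container": [["cup"], ["bottle"]],
    "Tool": [["screwdriver"], ["hammer"]],
}

def expand(symbol, rules):
    """Expand a grammar symbol into the set of concrete object labels."""
    if symbol not in rules:          # terminal: a concrete detector label
        return {symbol}
    labels = set()
    for production in rules[symbol]:
        for sym in production:
            labels |= expand(sym, rules)
    return labels

# A request such as "bring me something to drink from" could map to the
# Container category, so only the cup/bottle detectors need to run,
# rather than a static query over the whole model database.
print(sorted(expand("Container", RULES)))   # → ['bottle', 'cup']
```

Because categories are just grammar rules, new ones can be added or combined at runtime, which is the scalability property the abstract describes.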
Industrial robots have mainly been programmed by operators using teach pendants in a point-to-point scheme with limited sensing capabilities. New developments in robotics have attracted a lot of attention to robot motor skill learning via human interaction using Learning from Demonstration (LfD) techniques. Robot skill acquisition using LfD techniques is characterised by a high-level stage in charge of learning connected actions and a low-level stage concerned with motor coordination and reproduction of an observed path. In this paper, we present an approach for a robot to acquire a path-following skill in the low-level stage, which deals with the correspondence of mapping links and joints from a human operator to a robot so that the robot can actually follow a path. We present the design of an Inertial Measurement Unit (IMU) device that is primarily used as an input to acquire the robot skill. The approach is validated using a motion capture system as ground truth to assess the spatial deviation from the human-taught path to the robot's final trajectory.
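The validation step above compares the human-taught path against the robot's executed trajectory. As a simple, hedged sketch of one such spatial-deviation metric, the code below computes the mean point-wise Euclidean distance between two 3D paths, assuming both are sampled at the same time instants; the function name and the alignment assumption are illustrative, not the paper's actual evaluation procedure.

```python
import math

def mean_path_deviation(taught, executed):
    """Mean Euclidean distance between time-aligned 3D path samples.

    Assumes taught and executed contain the same number of (x, y, z)
    points sampled at the same instants (an illustrative simplification;
    real trajectories typically need resampling or time alignment first).
    """
    assert len(taught) == len(executed), "paths must be time-aligned"
    total = sum(math.dist(p, q) for p, q in zip(taught, executed))
    return total / len(taught)

# Toy example: the executed path wobbles around a straight taught path.
taught = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
executed = [(0, 0.1, 0), (1, -0.1, 0), (2, 0.2, 0)]
print(mean_path_deviation(taught, executed))   # average deviation in path units
```

With a motion capture system as ground truth, both paths would be expressed in the same world frame before such a metric is applied.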