For mobile robots to perform certain tasks in human environments, fast and accurate object classification is essential. Actively exploring objects by changing viewpoints promises an increase in the accuracy of object classification. This paper presents an efficient feature-based active vision system for the recognition and verification of objects that are occluded, appear in cluttered scenes, and may be visually similar to other objects present. The system is designed using a selector-observer framework, where the selector is responsible for automatically choosing the next best viewpoint and a Bayesian observer updates the belief hypothesis and provides feedback. A new method for automatically selecting the next best viewpoint is presented using vocabulary trees: a weighting is calculated for each feature based on its perceived uniqueness, allowing the system to select the viewpoint with the greatest number of unique features. The process is sped up because new images are captured only at the next best viewpoint and are processed only while the belief hypothesis for an object remains below a pre-defined threshold. The system also provides a certainty measure for an object's identity. It outperforms random viewpoint selection, processing far fewer viewpoints to recognise and verify objects in a scene.
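As a rough illustration of the viewpoint-selection idea, the sketch below weights each vocabulary-tree visual word by an inverse-document-frequency style uniqueness score and picks the candidate viewpoint whose visible features carry the most total weight. The names (idf_weights, next_best_viewpoint) and the TF-IDF-style weighting are assumptions made for illustration; the paper's exact vocabulary-tree weighting is not given in the abstract.

```python
import math

def idf_weights(word_doc_freq, num_objects):
    """Uniqueness weight per vocabulary-tree word: words seen on fewer
    database objects score higher (inverse document frequency)."""
    return {w: math.log(num_objects / df) for w, df in word_doc_freq.items()}

def next_best_viewpoint(viewpoint_words, weights):
    """Pick the candidate viewpoint whose visible features carry the
    largest total uniqueness weight."""
    def score(words):
        return sum(weights.get(w, 0.0) for w in set(words))
    return max(viewpoint_words, key=lambda v: score(viewpoint_words[v]))

# Hypothetical 3-object database: word 1 appears on one object (rare),
# words 2 and 3 on three and two objects respectively.
weights = idf_weights({1: 1, 2: 3, 3: 2}, num_objects=3)
views = {"front": [1, 2], "side": [2, 3]}
print(next_best_viewpoint(views, weights))  # "front" -- it sees the rarest word
```

Under this scoring, a viewpoint revealing features shared by many database objects contributes little, which is consistent with the abstract's goal of favouring views with many 'unique' features.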
We propose a more robust planner for a robot programming by demonstration system. The planner produces a reproduction path that satisfies statistical constraints derived from demonstration trajectories and avoids obstacles within the freedom those constraints allow. To determine the statistical constraints, a Gaussian Mixture Model (GMM) is fitted to demonstration trajectories recorded through kinesthetic teaching of a redundant manipulator; the GMM models the likelihood of configurations given time. The planner is based on Rapidly-exploring Random Tree (RRT) search, with the search tree kept within the statistical model. Collision avoidance is included by not allowing the tree to grow into obstacles. The system is designed to act as a backup for when a faster reactive planner falls into a local minimum. To illustrate its performance, an experiment is conducted in which the system is taught to open a Pelican case using a Barrett Whole Arm Manipulator (WAM). During reproduction, an obstacle is placed near the case to partially obstruct the manipulator. The planner successfully avoided this obstacle without drifting from the trends in the demonstrations.
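A minimal sketch of how such a constrained extension step could look, assuming an RRT that rejects new nodes whose likelihood under the fitted GMM falls below a threshold or that collide with an obstacle. Nodes are taken as (t, q) vectors so the GMM's time-conditioned likelihood applies; the helper names and the full-covariance mixture parameterisation are assumptions, not the paper's implementation.

```python
import numpy as np

def gmm_loglik(x, weights, means, covs):
    """Log-likelihood of a point x = (t, q) under a Gaussian mixture
    fitted to the demonstrations (full covariances assumed)."""
    lik = 0.0
    for w, mu, cov in zip(weights, means, covs):
        d = x - mu
        norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(cov))
        lik += w * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm
    return np.log(lik + 1e-300)  # guard against log(0)

def extend_tree(tree, x_rand, loglik_fn, in_collision, threshold, step=0.05):
    """One RRT extension kept inside the statistical model: step from the
    nearest node toward x_rand, keeping the new node only if it is likely
    enough under the GMM and collision-free."""
    x_near = min(tree, key=lambda x: np.linalg.norm(x - x_rand))
    direction = x_rand - x_near
    x_new = x_near + step * direction / (np.linalg.norm(direction) + 1e-12)
    if loglik_fn(x_new) >= threshold and not in_collision(x_new):
        tree.append(x_new)
    return tree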
One of the most important and challenging tasks for mobile robots that navigate autonomously is localisation: the process whereby a robot locates itself within a map of a known environment, or with respect to a known starting point within an unknown environment. Localisation of a robot in an unknown environment is done by tracking the robot's trajectory from its initial pose. Trajectory estimation becomes a challenge if the robot is operating in an unknown environment that has a scarcity of landmarks, is GPS-denied, has very little or no illumination, and is slippery, such as in underground mines. This paper attempts to solve the problem of estimating a robot's trajectory in underground mining environments using a time-of-flight (ToF) camera and an inertial measurement unit (IMU). In the past, this problem has been addressed with 3D laser scanners; these offer high measurement accuracy and a wide field of view, but are expensive and consume a lot of power. Here, trajectory estimation is accomplished by fusing the ego-motion provided by the ToF camera with measurement data provided by a low-cost IMU. The fusion is performed using the Kalman filter algorithm on a mobile robot moving on a 2D planar surface, with a Vicon system providing ground truth for the trajectory estimation. Trajectory estimation using only the ToF camera is prone to errors, especially when the robot is rotating, but the fused algorithm estimates accurate ego-motion even during rotation; the results show a significant improvement in trajectory estimation.
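A minimal extended-Kalman-style sketch of this kind of fusion, assuming a planar pose state [x, y, θ], an IMU-driven prediction from forward speed and yaw rate, and the ToF camera's ego-motion treated as a direct pose observation. The paper's actual state vector, motion model, and noise covariances are not given in the abstract, so all of those choices here are assumptions.

```python
import numpy as np

def kf_predict(x, P, u, Q, dt):
    """Predict planar pose [x, y, theta] by dead-reckoning the IMU:
    u = (v, omega) is forward speed and yaw rate over interval dt."""
    v, w = u
    theta = x[2]
    x_pred = x + dt * np.array([v * np.cos(theta), v * np.sin(theta), w])
    F = np.array([[1, 0, -dt * v * np.sin(theta)],   # Jacobian of the
                  [0, 1,  dt * v * np.cos(theta)],   # nonlinear motion model
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def kf_update(x, P, z, R):
    """Correct the pose with the ToF camera's ego-motion estimate,
    treated here as an identity observation of the full pose."""
    H = np.eye(3)
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

In this form the camera measurement corrects the drift that accumulates in the IMU dead-reckoning, which mirrors the rotation-error behaviour reported above.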