Thanks to the efforts of the robotics and autonomous systems community,
robots are becoming ever more capable. There is also an increasing demand from
end-users for autonomous service robots that can operate in real environments
for extended periods. In the STRANDS project we are tackling this demand
head-on by integrating state-of-the-art artificial intelligence and robotics
research into mobile service robots, and deploying these systems for long-term
installations in security and care environments. Over four deployments, our
robots have been operational for a combined duration of 104 days, autonomously
performing end-user-defined tasks and covering 116 km in the process. In this
article we describe the approach we have used to enable long-term autonomous
operation in everyday environments, and how our robots are able to use their
long run times to improve their own performance.
Current approaches to visual object class detection mainly focus on the recognition of abstract object categories, such as cars, motorbikes, mugs, and bottles. Although these approaches have demonstrated impressive recognition performance, their restriction to abstract categories seems artificial and inadequate in the context of embodied, cognitive agents. Here, distinguishing objects according to functional aspects based on object affordances is vital for meaningful human-machine interaction. In this paper, we propose a complete system for the detection of functional object classes, based on a representation of visually distinct hints at object affordances (affordance cues). It spans the complete cycle from tutor-driven acquisition of affordance cues, through one-shot learning of the corresponding object models, to the detection of novel instances of functional object classes in real images.
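As a rough illustration of the one-shot step, the sketch below stores local descriptors from a single tutor-provided cue region and re-detects that cue in a new image via descriptor matching. ORB features, Lowe's ratio test, and the `learn_cue`/`detect_cue` helpers are illustrative assumptions, not the paper's actual cue representation.

```python
# Hedged sketch: one-shot learning of an "affordance cue" as a bag of local
# descriptors, detected later by descriptor matching. Assumes OpenCV (cv2).
import cv2

def learn_cue(cue_image):
    """One-shot model: keypoints and descriptors from a single tutor example."""
    orb = cv2.ORB_create()
    return orb.detectAndCompute(cue_image, None)

def detect_cue(model, scene_image, ratio=0.75, min_matches=10):
    """Return True if the learned affordance cue appears in the scene image."""
    _, cue_desc = model
    orb = cv2.ORB_create()
    _, scene_desc = orb.detectAndCompute(scene_image, None)
    if cue_desc is None or scene_desc is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(cue_desc, scene_desc, k=2)
    # Ratio test keeps only distinctive matches between cue and scene.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches
```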
Highlights
- Segmentation of unknown objects in cluttered scenes.
- Abstraction of raw RGB-D data into parametric surface patches.
- Learning of perceptual grouping between surfaces with SVMs.
- Global decision making for segmentation using Graph-Cut.
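A minimal sketch of how SVM-scored pairwise grouping can feed a global graph-cut decision, using scikit-learn and PyMaxflow. The toy pairwise features, the binary object/background energy, and all numeric values are assumptions for illustration, not the highlighted paper's exact formulation.

```python
# Hedged sketch: an SVM scores whether two neighbouring surface patches
# belong together; a graph cut turns those pairwise scores plus per-patch
# "object-ness" into one global labelling.
import numpy as np
import maxflow                      # PyMaxflow: pip install PyMaxflow
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy training data: pairwise patch features (e.g. colour difference,
# normal angle, boundary curvature) with "same object" labels.
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
svm = SVC(probability=True).fit(X_train, y_train)

# Toy scene: 4 surface patches with unary object scores in [0, 1]
# and adjacency edges between neighbouring patches.
unary = np.array([0.9, 0.8, 0.2, 0.1])         # P(patch is object)
edges = [(0, 1), (1, 2), (2, 3)]
edge_feats = rng.normal(size=(len(edges), 3))   # pairwise features

g = maxflow.Graph[float]()
nodes = g.add_nodes(len(unary))

# Unary terms: cost of labelling each patch background vs. object.
for i, p in enumerate(unary):
    g.add_tedge(nodes[i], -np.log(1 - p + 1e-6), -np.log(p + 1e-6))

# Pairwise terms: the SVM's "same object" probability is the penalty
# for cutting the edge between two adjacent patches.
p_same = svm.predict_proba(edge_feats)[:, 1]
for (i, j), w in zip(edges, p_same):
    g.add_edge(nodes[i], nodes[j], w, w)

g.maxflow()
labels = [g.get_segment(nodes[i]) for i in range(len(unary))]
print("patch labels (0 = object side):", labels)
```

The cut jointly trades per-patch object-ness against the SVM's pairwise affinities, which is the "global decision making" idea named in the highlights.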
In this article we present and evaluate a system which allows a mobile robot to autonomously detect, model, and re-recognize objects in everyday environments. Whilst other systems have demonstrated individual elements of this pipeline, to our knowledge we present the first system capable of doing all of these things, without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting its dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.
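A compact sketch of the detect-by-change step described above: model the static scene as a point cloud, flag new points the static model does not explain, and cluster them into candidate objects. The distance threshold, the DBSCAN clustering, and the `extract_dynamic_clusters` helper are illustrative assumptions standing in for the paper's actual static-environment modelling.

```python
# Hedged sketch: extract dynamic elements as points in a fresh observation
# that are far from every point in the static scene model.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def extract_dynamic_clusters(static_cloud, new_cloud,
                             dist_thresh=0.05, cluster_eps=0.1):
    """Return point clusters in new_cloud that are absent from static_cloud."""
    tree = cKDTree(static_cloud)
    d, _ = tree.query(new_cloud)            # distance to nearest static point
    dynamic = new_cloud[d > dist_thresh]    # points not explained by the map
    if len(dynamic) == 0:
        return []
    labels = DBSCAN(eps=cluster_eps, min_samples=10).fit_predict(dynamic)
    return [dynamic[labels == k] for k in set(labels) if k != -1]

# Toy example: a flat "floor" as the static map, plus a small blob (a newly
# appeared object) in the fresh observation.
floor = np.c_[np.random.rand(1000, 2), np.zeros(1000)]
blob = np.random.normal([0.5, 0.5, 0.2], 0.02, size=(50, 3))
clusters = extract_dynamic_clusters(floor, np.vstack([floor, blob]))
print(f"found {len(clusters)} dynamic cluster(s)")
```

Each returned cluster is a candidate object around which a view plan could then be executed to gather the additional views used for model building.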