CORTEX is a cognitive robotics architecture inspired by three key ideas: modularity, internal modelling and graph representations. It is also a computational framework designed to support early forms of intelligence in real-world, human-interacting robots, starting from an a priori functional decomposition of the robot's capabilities. This set of abilities is then translated into computational modules, or agents, each built as a network of interconnected software components. These agents range from purely reactive modules connected to sensors and/or actuators to purely deliberative ones, but they can only communicate with each other through a graph structure called the Deep State Representation (DSR). The DSR is a short-term, dynamic representation of the space surrounding the robot, the objects and humans in it, and the robot itself. All these entities are perceived and transformed into different levels of abstraction, ranging from geometric data to high-level symbolic relations such as "the person is talking and gazing at me". The combination of symbolic and geometric information endows the architecture with the potential to simulate and anticipate the outcome of the actions executed by the robot. In this paper we present recent advances in the CORTEX architecture and several real-world human-robot interaction scenarios in which they have been tested. We describe our interpretation of the ideas inspiring the architecture and the reasons why this specific computational framework is a promising architecture for the social robots of tomorrow.
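To make the idea of a shared graph mixing geometric and symbolic information more concrete, the sketch below shows a minimal DSR-like structure. It is only an illustration under assumed names (DSRGraph, add_node, add_edge, the "rt" and "is_gazing_at" labels are all hypothetical), not the actual CORTEX/RoboComp implementation or its API.

```python
# Minimal sketch of a DSR-like graph: names and labels are illustrative,
# not the actual CORTEX API.
import numpy as np


class DSRGraph:
    """Shared graph mixing symbolic attributes with geometric transforms."""

    def __init__(self):
        self.nodes = {}   # name -> attribute dict (symbolic layer)
        self.edges = {}   # (src, dst, label) -> attribute dict (geometric or symbolic)

    def add_node(self, name, **attrs):
        self.nodes[name] = attrs

    def add_edge(self, src, dst, label, **attrs):
        self.edges[(src, dst, label)] = attrs

    def query(self, label):
        """Return all (src, dst) pairs related by a given symbolic label."""
        return [(s, d) for (s, d, l) in self.edges if l == label]


# Example: the robot perceives a person who is talking and gazing at it.
g = DSRGraph()
g.add_node("robot", kind="robot")
g.add_node("person_1", kind="person")
# Geometric layer: metric pose of the person in the robot frame.
g.add_edge("robot", "person_1", "rt", transform=np.eye(4))
# Symbolic layer: high-level relations produced by perceptual agents.
g.add_edge("person_1", "robot", "is_talking_to")
g.add_edge("person_1", "robot", "is_gazing_at")

print(g.query("is_gazing_at"))   # [('person_1', 'robot')]
```

In this reading, reactive and deliberative agents would read from and write to such a graph rather than calling each other directly, which is what allows the architecture to simulate outcomes over both metric and symbolic layers.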
This paper introduces a taxonomy of vision systems for ground mobile robots. In the last five years, a significant number of relevant papers have contributed to this subject. First, a thorough review of these papers is carried out to discuss and classify both past and the most recent approaches in the field, yielding a global picture of the state of the art over the last five years. This study of the literature is then used to put forward a comprehensive taxonomy based on the most up-to-date research in ground mobile robotics; in this sense, the paper aims to be especially helpful to both budding and experienced researchers in the areas of vision systems and mobile ground robots. The taxonomy is devised from a novel perspective, namely to answer the main questions posed when designing robotic vision systems: why?, what for?, what with?, how?, and where? The answers are derived from the most relevant techniques described in the recent literature, leading naturally to a series of classifications that are discussed and contextualized. The article offers a global picture of the state of the art in the area and identifies some promising research lines.
Abstract. This paper presents an overview of the ImageCLEF 2013 lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the cross-language annotation and retrieval of images in various domains, ranging from public and personal images to data acquired by mobile robot platforms and botanic collections. Over the years, by providing new data collections and challenging tasks to the community of interest, the ImageCLEF lab has achieved a unique position in the multilingual image annotation and retrieval research landscape. The 2013 edition consisted of three tasks: the photo annotation and retrieval task, the plant identification task and the robot vision task. Furthermore, the medical annotation task, which has traditionally been under the ImageCLEF umbrella and this year celebrates its tenth anniversary, has been organized in conjunction with AMIA for the first time. The paper describes the tasks and the 2013 competition, giving a unifying perspective of the present activities of the lab while discussing future challenges and opportunities.
In the last decade, competitions have proved to be a very efficient way of encouraging researchers to advance the state of the art in different research fields in artificial intelligence. In this paper we focus on the optional task of the RobotVision@ImageCLEF competition, which consists of a visual place classification problem where images are not isolated pictures but a sequence of frames captured by a camera mounted on a mobile robot. This fact leads us to treat the problem not as a stand-alone classification problem, but as a self-localization problem in which the robot's main sensor captures only visual information. Thus, we base our proposal on a clever combination of Monte-Carlo-based self-localization methods with optimized versions of the scale-invariant feature transform (SIFT) algorithm for image representation and matching. The effectiveness of our approach is attested by our winning this task in the 2009 RobotVision@ImageCLEF and 2010 RobotVision ImageCLEF@ICPR competitions.
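The sketch below illustrates, in very reduced form, how a Monte Carlo self-localization loop over discrete place hypotheses can be weighted by an image-matching score. It is a hedged sketch only: the place names, the transition model, and the match_score stub (standing in for SIFT matching against reference images) are assumptions for illustration, not the authors' actual system.

```python
# Sketch of Monte Carlo place classification: particle weights come from an
# image-matching score (e.g. number of SIFT matches against reference images
# of each candidate place). match_score is a stub, not the real matcher.
import random

PLACES = ["corridor", "kitchen", "printer_area", "office"]
TRANSITIONS = {p: PLACES for p in PLACES}      # assume any place is reachable


def match_score(frame, place):
    """Stub: similarity between the current frame and the stored reference
    images of `place` (in practice, a count of SIFT feature matches)."""
    return random.uniform(0.0, 1.0)


def mcl_step(particles, frame, n=200):
    """One predict / update / resample cycle over discrete place hypotheses."""
    # Predict: each particle may move to a reachable (here: any) place.
    moved = [random.choice(TRANSITIONS[p]) for p in particles]
    # Update: weight each particle by how well the frame matches its place.
    weights = [match_score(frame, p) + 1e-6 for p in moved]
    # Resample proportionally to the weights.
    return random.choices(moved, weights=weights, k=n)


particles = [random.choice(PLACES) for _ in range(200)]
for frame in range(10):                        # frames would be camera images
    particles = mcl_step(particles, frame)
best = max(set(particles), key=particles.count)
print("estimated place:", best)
```

The key design point conveyed by the abstract is that successive frames are not classified independently; the particle set carries the localization belief across the sequence, and visual matching only updates that belief.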