Robotics is an area of research in which the paradigm of Multi-Agent Systems (MAS) can prove highly useful. Multi-Agent Systems appear in the form of cooperative robot teams, sensor networks based on mobile robots, and robots in Intelligent Environments, to name but a few. However, the development of Multi-Agent Robotic Systems (MARS) still presents major challenges. Over the past decade, a large number of Robotics Software Frameworks (RSFs) have appeared that propose solutions to the most recurrent problems in robotics. Some of these frameworks, such as ROS, YARP, OROCOS, ORCA, Open-RTM, and Open-RDK, possess certain characteristics and provide the basic infrastructure necessary for the development of MARS. The contribution of this work is the identification of such characteristics, as well as an analysis of these frameworks in comparison with general-purpose Multi-Agent System Frameworks (MASFs), such as JADE and Mobile-C.
The use of Cloud Computing for computation offloading in robotics has become a field of growing interest. The aim of this work is to demonstrate the viability of cloud offloading for a low-level, computationally intensive task: vision-based navigation assistance for a service mobile robot. To do so, a prototype running on a ROS-based mobile robot (Erratic by Videre Design LLC) is presented. The information extracted from the on-board stereo cameras is processed by a private cloud platform consisting of five bare-metal nodes with AMD Phenom 965 quad-core CPUs, running the cloud middleware OpenStack Havana. The actual task is the shared control of robot teleoperation, that is, the smooth filtering of the teleoperated commands against the detected obstacles to prevent collisions. All the possible offloading models for this case are presented and analyzed. Performance results using different communication technologies and offloading models are reported as well. In addition, a real navigation trial in a domestic circuit was carried out. The tests demonstrate that offloading computation to the Cloud improves performance and navigation results with respect to the case where all processing is done on the robot.
Note to Practitioners: Cloud computing for robotics is very promising for several reasons, such as robot energy savings, larger storage capacity, stable electric power, better resource utilization, and the difficulty of upgrading robots' embedded hardware. The presented application extracts 3D point clouds from the stereo image pairs of a camera mounted on the robot. Using these 3D points, a shared control scheme is implemented to assist the remote teleoperation of the robot: the commands sent by a joystick are attenuated when a possible collision is detected (by checking the future commanded trajectory against the 3D points). All of these computationally heavy tasks (difficult for a mobile robot to perform) are done in the cloud. The offloading models proposed in this paper are generic enough to be used in other applications. The results obtained show that further improvements in communication technologies will yield a significant performance boost for offloaded computation.
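A minimal sketch of the shared-control idea described above, not the authors' implementation: joystick velocity commands are attenuated in proportion to how close the short-horizon trajectory they imply comes to points in the perceived 3D cloud. The function names, the unicycle model, and parameters such as the safety distance are illustrative assumptions.

```python
import numpy as np

def predict_trajectory(v, w, horizon=1.5, dt=0.1):
    """Integrate a simple unicycle model to get the (x, y) points that the
    commanded linear velocity v and angular velocity w would visit."""
    x, y, th = 0.0, 0.0, 0.0
    pts = []
    for _ in range(int(horizon / dt)):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        pts.append((x, y))
    return np.array(pts)

def attenuate_command(v, w, obstacles_xy, safe_dist=0.6):
    """Scale down the teleoperated command according to how close the
    predicted trajectory comes to any obstacle point (3D cloud points
    projected onto the ground plane)."""
    if len(obstacles_xy) == 0:
        return v, w
    traj = predict_trajectory(v, w)
    # Minimum distance between any predicted pose and any obstacle point.
    d = np.min(np.linalg.norm(traj[:, None, :] - obstacles_xy[None, :, :], axis=2))
    scale = np.clip(d / safe_dist, 0.0, 1.0)  # 1.0 = free space, 0.0 = imminent collision
    return v * scale, w * scale
```

For example, `attenuate_command(0.5, 0.0, cloud_xy)` would return a reduced forward velocity whenever `cloud_xy` contains points directly ahead within the assumed safety distance.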
The tracking problem (that is, how to follow a previously memorized path) is one of the most important problems in mobile robots. Several methods can be formulated depending on the way the robot state is related to the path. “Trajectory tracking” is the most common method, with the controller aiming to move the robot toward a moving target point, as in a real-time servo system. In the case of complex systems or systems under perturbations or unmodeled effects, such as UAVs (Unmanned Aerial Vehicles), other tracking methods can offer additional benefits. In this paper, methods that consider the dynamics of the path’s descriptor parameter (which can be called “error adaptive tracking”) are contrasted with trajectory tracking. A formal description of tracking methods is first presented, showing that two types of error adaptive tracking can be used with the same controller in any system. Then, it is shown that the selection of an appropriate tracking rate improves error convergence and robustness for a UAV system, which is illustrated by simulation experiments. It is concluded that error adaptive tracking methods outperform trajectory tracking ones, producing faster and more robust convergence, while preserving, if required, the same tracking rate once convergence is achieved.
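A minimal sketch of the distinction, under an assumed simple formulation that is not the paper's controller: in trajectory tracking the path parameter advances at a fixed rate with time, whereas in an error-adaptive scheme its rate is attenuated as the tracking error grows, so the target point does not run away from a perturbed vehicle. The reference path and the attenuation law below are illustrative assumptions.

```python
import numpy as np

def reference(s):
    """Assumed reference path: a unit circle parameterized by s."""
    return np.array([np.cos(s), np.sin(s)])

def advance_parameter(s, robot_pos, dt, rate=0.5, mode="trajectory", k_err=2.0):
    """Advance the path descriptor parameter s by one time step.

    mode="trajectory": s grows at a fixed rate (time-indexed target point).
    mode="error_adaptive": the rate is reduced by the current tracking error,
    so the target effectively waits for the robot when the error is large.
    """
    if mode == "trajectory":
        s_dot = rate
    else:
        err = np.linalg.norm(reference(s) - robot_pos)
        s_dot = rate / (1.0 + k_err * err)  # assumed attenuation law
    return s + s_dot * dt
```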
Thanks to the advent of technologies like Cloud Computing, the idea of computation offloading of robotic tasks is more than feasible. Therefore, it is possible to keep using legacy embedded systems for computationally heavy tasks like navigation or artificial vision, hence extending their lifespan. In this chapter we apply Cloud Computing to build a Cloud-Based 3D Point Cloud extractor for stereo images. The objective is to have a dynamically scalable solution (one of Cloud Computing's most important features) that is applicable to near real-time scenarios. This last requirement brings several challenges that must be addressed: meeting deadlines, stability, and the limitations of communication technologies. All those elements are thoroughly analyzed in this chapter, providing experimental results that prove the efficacy of the solution. At the end of the chapter, a successful use case of the platform is explained: navigation assistance.
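A minimal sketch of the kind of processing such a cloud worker might perform, assuming OpenCV and a reprojection matrix Q obtained from stereo calibration; this is not the chapter's actual implementation. A rectified stereo pair arrives, a disparity map is computed, and it is reprojected into a 3D point cloud to be returned to the robot.

```python
import cv2
import numpy as np

def extract_point_cloud(left_bgr, right_bgr, Q, num_disparities=64, block_size=5):
    """Compute a 3D point cloud from a rectified stereo pair.

    Q is the 4x4 disparity-to-depth reprojection matrix produced by
    stereo calibration (e.g., cv2.stereoRectify)."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=num_disparities,
                                    blockSize=block_size)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    points = cv2.reprojectImageTo3D(disparity, Q)   # HxWx3 array of XYZ values
    valid = disparity > 0                           # keep pixels with a valid match
    return points[valid]                            # Nx3 point cloud
```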
Keywords: Stereo vision, Epipolar restriction, Image matching, Address-Event-Representation, Spike, Retina, Area-based method, Features-based method.
Image processing in digital computer systems usually treats visual information as a sequence of frames. These frames come from cameras that capture reality for a short period of time; they are renewed and transmitted at a rate of 25-30 fps (a typical real-time scenario). Digital video processing has to process each frame in order to produce a filtered result or detect a feature in the input. In stereo vision, existing algorithms use frames from two digital cameras and process them pixel by pixel until a pattern match is found between sections of both stereo frames. Spike-based processing is a relatively new approach that carries out the processing by handling spikes one by one as they are transmitted, as the human brain does. The mammalian nervous system is able to solve much more complex problems, such as visual recognition, by manipulating neurons' spikes. The spike-based philosophy for visual information processing, based on the neuro-inspired Address-Event-Representation (AER), is nowadays achieving very high performance. In this work we study the existing digital stereo matching algorithms and how they work. After that, we propose an AER stereo matching algorithm using some of the principles found in digital stereo methods.
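For reference, a minimal sketch of the frame-based, area-based matching that the abstract contrasts with the spike-based approach (illustrative only; the proposed AER algorithm works event by event): a block around a left-image pixel is compared, along the same epipolar scanline of a rectified pair, against candidate blocks in the right image using the sum of absolute differences. Border handling is omitted for brevity.

```python
import numpy as np

def sad_block_match(left, right, row, col, half=3, max_disp=64):
    """Area-based matching: for the block around (row, col) in the left image,
    search along the same scanline of the right image (epipolar restriction
    for rectified pairs) and return the disparity with minimum SAD cost."""
    block = left[row - half:row + half + 1, col - half:col + half + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1].astype(np.int32)
        cost = np.abs(block - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```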