This paper presents the design of a new lightweight, hyper-redundant, deployable Binary Robotic Articulated Intelligent Device (BRAID) for space robotic systems. The BRAID is intended to meet the challenges of future space robotic systems, which will need to perform more complex tasks than are currently feasible. It is lightweight, has many degrees of freedom, and has a large workspace. The device is based on embedded muscle-type binary actuators and flexure linkages. Such a system can be used for a wide range of tasks while requiring minimal control computation and power.
In field environments it is often not possible to provide robot teams with detailed a priori environment and task models. In such unstructured environments, robots need to create a dimensionally accurate three-dimensional geometric model of their surroundings by performing appropriate sensing actions. However, uncertainty in robot locations and sensing limitations/occlusions make this difficult. A new algorithm, based on iterative sensor planning and sensor redundancy, is proposed to build a geometrically consistent dimensional map of the environment for mobile robots with articulated sensors. The aim is to acquire new information that leads to more detailed and complete knowledge of the environment. The robots are controlled to maximize the geometric knowledge gained about the environment using an evaluation function based on Shannon's information theory. Using measurements and Markovian predictions of the unknown environment, an information-theoretic metric is maximized to determine a robotic agent's next best view (NBV) of the environment. Data collected at this NBV pose are fused with the measured environment map using a Kalman filter statistical uncertainty model. The process continues until the environment map is complete. The work is unique in its application of information theory to enhance the performance of environment-sensing robot agents. It may be used by multiple distributed, decentralized sensing agents for efficient and accurate cooperative environment modeling. The algorithm makes no assumptions about the structure of the environment; hence it is robust to robot failure, since the environment model being built is not tied to any single agent's frame but is set in an absolute reference frame. It accounts for sensing uncertainty, robot motion uncertainty, environment model uncertainty, and other critical parameters, and it allows regions of higher interest to receive greater attention from the agents.
This algorithm is particularly well suited to unstructured environments, where sensor uncertainty and occlusions are significant. Simulations and experiments demonstrate its effectiveness.
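The Kalman filter fusion step described in the abstract can be illustrated with a minimal scalar sketch. This is not the authors' implementation; the function name and the reduction to a single map-point depth estimate are assumptions made for illustration, and a real system would fuse full 3D point estimates with covariance matrices.

```python
def kalman_fuse(map_est, map_var, meas, meas_var):
    """Fuse a new NBV measurement into the stored map estimate.

    map_est, map_var: current estimate and variance of a map point's depth.
    meas, meas_var:   new measurement and its sensor-noise variance.
    Returns the fused estimate and its (reduced) variance.
    """
    K = map_var / (map_var + meas_var)      # Kalman gain
    fused = map_est + K * (meas - map_est)  # updated estimate
    fused_var = (1.0 - K) * map_var         # uncertainty always shrinks
    return fused, fused_var

est, var = kalman_fuse(2.0, 0.5, 2.4, 0.5)
# equal variances: fused estimate is the midpoint (2.2), variance halves (0.25)
```

With equal prior and measurement variances, the gain is 0.5, so the fused estimate sits midway between the two values; this is the mechanism by which repeated NBV observations drive the map's statistical uncertainty down.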
To meet the objectives of many future missions, robots will need to be adaptable and reconfigurable. A concept for such a robotic system, based on a large number of simple binary actuators, has been proposed previously. Previous researchers have addressed some of the issues raised by robots with a few binary actuators. This paper examines the computational feasibility of controlling and planning for binary robotic systems with a large number of actuators, including computation of their workspace, forward kinematics, inverse kinematics, and trajectory following. Methods are proposed and evaluated in simulation. A detailed error analysis and the computational requirements are presented, along with a planning example for a binary walking robot.
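The core computational difficulty the abstract refers to is that a binary robot's workspace is a discrete set of 2^n poses. A minimal sketch, assuming a planar serial chain in which each binary joint snaps to one of two angles (the function name, link length, and angle pair are illustrative assumptions, not the paper's parameters):

```python
import itertools
import math

def binary_workspace(n_joints, link_len=1.0, angles=(0.0, math.pi / 6)):
    """Enumerate all 2^n end-effector positions of a planar serial chain
    whose joints are binary: each joint angle is one of the two values in
    `angles`. Brute-force enumeration is only feasible for small n, which
    is exactly why planning methods for many-actuator binary robots are
    needed."""
    poses = []
    for state in itertools.product((0, 1), repeat=n_joints):
        x = y = theta = 0.0
        for bit in state:
            theta += angles[bit]             # binary joint: one of two angles
            x += link_len * math.cos(theta)  # forward kinematics, link by link
            y += link_len * math.sin(theta)
        poses.append((x, y))
    return poses

ws = binary_workspace(4)   # 2^4 = 16 reachable end-effector positions
```

Inverse kinematics in this setting becomes a discrete search for the state whose forward-kinematics result is closest to a target, which scales exponentially unless approximated.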
In field environments it is usually not possible to provide robots in advance with valid geometric models of their environment and the locations of task elements. A robot or robot team must create and use these models to locate critical task elements by performing appropriate sensor-based actions. This paper presents a multi-agent algorithm for a manipulator guidance task based on cooperative visual feedback in an unknown environment. First, an information-based iterative algorithm intelligently plans the robot's visual exploration strategy, enabling it to efficiently build 3D models of its environment and task elements. The algorithm uses the measured scene information to find the next camera position based on the expected new information content of that pose. This is achieved with a metric derived from Shannon's information theory that determines optimal sensing poses for the agent(s) mapping a highly unstructured environment. Second, after an appropriate environment model has been built, the quality of the information content in the model is used to determine the constraint-based optimal view for task execution. The algorithm is applicable to an individual agent as well as to multiple cooperating agents. Simulations and experimental demonstrations on a cooperative robot platform performing a two-component insertion/mating task in the field show the effectiveness of the algorithm.
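The Shannon-information metric for choosing the next camera pose can be sketched with a toy occupancy grid: score each candidate view by the total entropy of the cells it can see, and pick the view expected to yield the most new information. This is an illustrative reduction, not the paper's metric; the pose names, grid, and visibility map below are all hypothetical.

```python
import math

def cell_entropy(p):
    """Shannon entropy (bits) of one occupancy cell with P(occupied) = p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_best_view(candidates, grid):
    """Pick the candidate pose whose visible cells carry the most entropy.

    candidates: dict mapping pose name -> indices of grid cells it can see.
    grid:       list of occupancy probabilities, 0.5 = fully unknown.
    """
    score = lambda pose: sum(cell_entropy(grid[i]) for i in candidates[pose])
    return max(candidates, key=score)

# Toy example: 0.5 = unknown cell, 0.05 / 0.95 = nearly decided cells.
grid = [0.5, 0.5, 0.05, 0.95]
views = {"pose_A": [0, 1], "pose_B": [2, 3]}
best = next_best_view(views, grid)  # pose_A, since it sees the unknown cells
```

Cells near probability 0.5 carry one full bit of entropy, while nearly decided cells carry almost none, so the view covering unexplored regions wins; this is the sense in which the exploration strategy maximizes expected new information content.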