This paper introduces ANYmal, a quadrupedal robot that features outstanding mobility and dynamic motion capability. Thanks to novel, compliant joint modules with integrated electronics, the 30 kg, 0.5 m tall robotic dog is torque controllable and very robust against impulsive loads during running or jumping. The presented machine was designed with a focus on outdoor suitability, simple maintenance, and user-friendly handling to enable future operation in real-world scenarios. Performance tests with the joint actuators indicated a torque control bandwidth of more than 70 Hz, high disturbance rejection capability, and impact robustness when moving at maximal velocity. A series of experiments demonstrates that ANYmal can execute walking gaits, dynamically trot at moderate speed, and perform special maneuvers to stand up or crawl up very steep stairs. Detailed measurements show that even full-speed running requires less than 280 W, resulting in an autonomy of more than 2 h.
This paper provides a system overview of ANYmal, a quadrupedal robot developed for operation in harsh environments. The 30 kg, 0.5 m tall robotic dog was built in a modular way for simple maintenance and user-friendly handling, while focusing on high mobility and dynamic motion capability. The system is tightly sealed to reach the IP67 standard and protected to survive falls. Rotating lidar sensors in the front and back are used for localization and terrain mapping, and compact force sensors in the feet provide accurate measurements of the contact situation. The variable payload, such as a modular pan-tilt head with a variety of inspection sensors, can be exchanged depending on the application. Thanks to novel, compliant joint modules with integrated electronics, ANYmal is precisely torque controllable and very robust against impulsive loads during running or jumping. In a series of experiments we demonstrate that ANYmal can execute various climbing maneuvers and walking gaits, as well as a dynamic trot and jump. As a special feature, the joints can be fully rotated to switch between X- and O-type kinematic configurations. Detailed measurements reveal a low energy consumption of 280 W during locomotion, which results in an autonomy of more than 2 h.
Abstract-This paper presents a framework for planning safe and efficient paths for a legged robot in rough and unstructured terrain. The proposed approach makes it possible to exploit the distinctive obstacle negotiation capabilities of legged robots, while keeping the complexity low enough to enable planning over considerable distances in a short time. We compute typical terrain characteristics such as slope, roughness, and steps to build a traversability map. This map is used to assess the costs of individual robot footprints as a function of the robot-specific obstacle negotiation capabilities for steps, gaps, and stairs. Our sampling-based planner employs the RRT* algorithm to optimize path length and safety. The planning framework has a hierarchical architecture to frequently replan the path during execution as new terrain is perceived with onboard sensors. Furthermore, a cascaded planning structure makes use of different levels of simplification to allow for fast search in simple environments, while retaining the ability to find complex solutions, such as paths through narrow passages. The proposed navigation planning framework is integrated on the quadrupedal robot StarlETH and extensively tested in simulation as well as on the real platform.
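The mapping from terrain characteristics to footprint costs described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights and the robot-specific capability limits (`max_slope`, `max_roughness`, `max_step`) are assumed values chosen for the example.

```python
def traversability_cost(slope, roughness, step_height,
                        max_slope=0.6, max_roughness=0.1, max_step=0.2):
    """Combine terrain characteristics into a single cost in [0, 1].

    Returns float('inf') when any characteristic exceeds the assumed
    robot-specific obstacle negotiation capability, marking the cell
    as untraversable for the planner.
    """
    if slope > max_slope or roughness > max_roughness or step_height > max_step:
        return float('inf')
    weights = (0.5, 0.3, 0.2)  # assumed relative importance of each term
    return (weights[0] * slope / max_slope
            + weights[1] * roughness / max_roughness
            + weights[2] * step_height / max_step)

# Gentle terrain yields a low cost; a step above the limit is rejected.
print(traversability_cost(0.3, 0.02, 0.05))
print(traversability_cost(0.3, 0.02, 0.5))
```

A sampling-based planner such as RRT* would then sum these costs along candidate footprint sequences, trading path length against safety.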
This work approaches the problem of controlling quadrupedal running and jumping motions with a parametrized, model-based, state-feedback controller. Inspired by the motor learning principles observed in nature, our method automatically fine-tunes the parameters of our controller by repeatedly executing slight variations of the same motion task. This learn-through-practice process is performed in simulation in order to best exploit computational resources and to prevent the robot from damaging itself. To ensure that the simulation results match the behavior of the hardware platform sufficiently well, we introduce and validate an accurate model of the compliant actuation system. The proposed method is experimentally verified on the torque-controllable quadruped robot StarlETH by executing squat jumps and dynamic gaits such as a running trot, a pronk, and a bounding gait.
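The learn-through-practice loop can be sketched as a simple stochastic search: repeatedly execute slight variations of the controller parameters in simulation and keep only improvements. The quadratic `simulate_cost` below is a stand-in for the actual simulated motion cost; the target parameters and step size are illustrative assumptions, not values from the paper.

```python
import random

def simulate_cost(params):
    """Hypothetical simulated motion cost: distance to an assumed optimum."""
    target = [1.0, -0.5, 2.0]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def tune(params, iters=500, sigma=0.1, seed=0):
    """Fine-tune controller parameters by trying slight variations."""
    rng = random.Random(seed)
    best, best_cost = list(params), simulate_cost(params)
    for _ in range(iters):
        trial = [p + rng.gauss(0, sigma) for p in best]  # slight variation
        cost = simulate_cost(trial)
        if cost < best_cost:  # keep the variation only if it improved
            best, best_cost = trial, cost
    return best, best_cost

params, cost = tune([0.0, 0.0, 0.0])
```

Running the search in simulation rather than on hardware, as the abstract notes, makes such repeated trial execution cheap and safe.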
Abstract. In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is nonconvex, so solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
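The sparsity-favoring behavior of a lower-bounded logarithmic gradient prior can be illustrated in one dimension. The lower bound `eps` below is an assumed parametrization that keeps the logarithm finite at zero gradients; the paper's exact formulation may differ.

```python
import math

def log_prior(gradients, eps=1e-3):
    """Lower-bounded logarithmic prior on image gradients (1-D sketch).

    Lower values mean the configuration is preferred by the prior.
    """
    return sum(math.log(eps + abs(g)) for g in gradients)

# Two gradient fields with the same total variation: one concentrates
# the variation in a single strong edge, the other spreads it out.
sparse = [0.0, 0.0, 1.0]
diffuse = [0.34, 0.33, 0.33]

# The log prior assigns lower energy to the sparse configuration,
# unlike a quadratic penalty, which would prefer the diffuse one.
print(log_prior(sparse), log_prior(diffuse))
```

Because this prior is nonconvex, the abstract's two algorithms (primal-dual and majorization-minimization) each replace it at every iteration with a convex surrogate that is minimized exactly.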