Current locomotion algorithms in structured (indoor) 3D environments require accurate localization. The many and diverse sensors typically embedded in legged robots (IMU, encoders, cameras and/or LIDARs) should make this possible if properly fused, yet fusion is a difficult task due to the heterogeneity of these sensors and the real-time requirements of the control. While previous works used staggered approaches (odometry at high frequency, sparsely corrected by vision- and LIDAR-based localization), recent progress in optimal estimation, in particular in visual-inertial localization, is paving the way toward a holistic fusion. This paper is a contribution in this direction. We quantify how accurately a visual-inertial navigation system can localize a humanoid robot in a 3D indoor environment tagged with fiducial markers. We introduce a theoretical contribution that strengthens the formulation of Forster's IMU pre-integration, a practical contribution that avoids a possible ambiguity in the pose estimation of fiducial markers, and an experimental contribution on a humanoid dataset with ground truth. Our system localizes the robot with less than 2 cm of error once the environment is properly mapped. Thanks to the genericity of the proposed pre-integration algebra, the approach naturally extends to additional measurements such as leg odometry (kinematic factors).
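To make the underlying technique concrete, the core idea of Forster-style IMU pre-integration is to accumulate gyroscope and accelerometer samples between two estimation keyframes into relative deltas that do not depend on the (unknown) global pose and velocity. The following minimal sketch shows that recursion; bias correction and noise propagation, which the paper's formulation addresses, are deliberately omitted, and all names here are illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-8:
        return np.eye(3)
    k = w / th
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def preintegrate_imu(gyro, acc, dt):
    """Accumulate IMU deltas (dR, dv, dp) expressed in the frame of the
    first sample, independently of the global state -- the key property
    that lets the deltas be computed once and reused during optimization.
    Bias and noise terms are omitted for brevity."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, acc):
        dp += dv * dt + 0.5 * (dR @ a) * dt**2
        dv += (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```

For a stationary gyroscope and a constant specific force, the deltas reduce to the familiar constant-acceleration formulas, which is a convenient sanity check when implementing the recursion.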
State estimation, in particular of the base position, orientation, and velocity, plays a key role in the efficiency of legged robot stabilization. Estimating the base state is particularly important because of its strong correlation with the underactuated dynamics, i.e. the evolution of the center of mass and angular momentum. Yet this estimation is typically done in two stages: first the base state is estimated, then the center of mass is reconstructed from the robot model. The underactuated dynamics is therefore not directly observed, and any bias in the model is not corrected by the sensors. While it has already been observed that force measurements make such a bias observable, they are often used only for a binary estimation of the contact state. In this paper, we propose to estimate the base and underactuation states simultaneously, using all measurements at once. To this end, we present several contributions toward a complete state estimator based on factor graphs. Contact forces acting on the underactuated dynamics are pre-integrated using a novel adaptation of the IMU pre-integration method, which constitutes the main contribution. IMU pre-integration is also used to measure the positional motion of the base. Encoder measurements then participate in the estimation in two ways: by providing leg-odometry displacements, which contribute to the observability of the IMU biases; and by relating the positional and centroidal states, thus connecting the whole graph and producing a tightly-coupled whole-body estimator. The validity of the approach is demonstrated on real data captured by the Solo12 quadruped robot.
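The adaptation described above can be sketched by analogy: just as IMU samples are accumulated into pose and velocity deltas, measured contact forces can be accumulated into centroidal deltas between two factor-graph nodes. The sketch below illustrates that analogy under simplifying assumptions (gravity and angular-momentum terms dropped, a single net force, no bias modeling); function and variable names are hypothetical and not the paper's API.

```python
import numpy as np

def preintegrate_contact_forces(forces, mass, dt):
    """Illustrative pre-integration of net contact forces into centroidal
    deltas between two estimation nodes: dc is the accumulated CoM
    position delta, ddc the accumulated CoM velocity delta. This mirrors
    the IMU pre-integration recursion, with f/m playing the role of the
    rotated accelerometer measurement. Gravity, angular momentum, and
    bias terms are intentionally omitted from this sketch."""
    dc = np.zeros(3)   # CoM position delta
    ddc = np.zeros(3)  # CoM velocity delta
    for f in forces:
        a = f / mass                      # linear CoM acceleration from contact
        dc += ddc * dt + 0.5 * a * dt**2  # position update (constant a over dt)
        ddc += a * dt                     # velocity update
    return dc, ddc
```

Because the deltas depend only on measured forces and the robot mass, such a factor can be built once per node pair and reused across optimizer iterations, which is the practical appeal of the pre-integration viewpoint.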