We present a data processing pipeline to estimate ego-motion online and build a map of the traversed environment, leveraging data from a 3D laser scanner, a camera, and an inertial measurement unit (IMU). Unlike traditional methods that use a Kalman filter or factor-graph optimization, the proposed method employs a sequential, multilayer processing pipeline, solving for motion from coarse to fine. Starting with IMU mechanization for motion prediction, a visual-inertial coupled method estimates motion; a scan matching method then further refines the motion estimates and registers maps. The resulting system enables high-frequency, low-latency ego-motion estimation, along with dense, accurate 3D map registration. Further, the method is capable of handling sensor degradation by automatically reconfiguring to bypass failed modules. It can therefore operate in the presence of highly dynamic motion as well as in dark, texture-less, and structure-less environments. In experiments, the method demonstrates 0.22% relative position drift over 9.3 km of navigation and robustness with respect to running, jumping, and even highway-speed driving (up to 33 m/s).

KEYWORDS
mapping, ego-motion estimation, 3D laser scanner, vision, inertial sensor

1 | INTRODUCTION

This paper aims at developing a method for online ego-motion estimation with data from a 3D laser scanner, a camera, and an inertial measurement unit (IMU). The estimated motion further registers laser points to build a map of the traversed environment. In many real-world applications, ego-motion estimation and mapping must be conducted in real time. In an autonomous navigation system, the map is crucial for motion planning and obstacle avoidance, while the motion estimation is important for vehicle control and maneuvering. Very often, high-accuracy global positioning system (GPS)-inertial navigation system (INS) solutions are impractical when the application is GPS-denied, lightweight, or cost-sensitive.
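The coarse-to-fine, sequential structure described in the abstract can be illustrated with a minimal sketch. The module names, the reduced 2D state, and all numeric corrections below are illustrative assumptions, not the authors' implementation; the point is only the pattern of each layer refining the previous layer's estimate and being skippable on sensor degradation.

```python
# Minimal sketch of a sequential, coarse-to-fine motion-estimation
# pipeline (IMU prediction -> visual-inertial update -> scan matching).
# All names and values are hypothetical; the paper solves full 6-DOF motion.
from dataclasses import dataclass


@dataclass
class Pose:
    """Reduced 2D pose used purely for illustration."""
    x: float
    y: float
    yaw: float


def imu_predict(prev: Pose, accel_dx: float, gyro_dyaw: float) -> Pose:
    """Coarse prediction from IMU mechanization (dead reckoning)."""
    return Pose(prev.x + accel_dx, prev.y, prev.yaw + gyro_dyaw)


def visual_inertial_update(pred: Pose, visual_correction: float) -> Pose:
    """Refine the IMU prediction with a visual-inertial estimate."""
    return Pose(pred.x + visual_correction, pred.y, pred.yaw)


def scan_match_refine(est: Pose, laser_correction: float) -> Pose:
    """Final refinement by matching the laser scan against the map."""
    return Pose(est.x + laser_correction, est.y, est.yaw)


def process_frame(prev: Pose, imu, visual, laser) -> Pose:
    # Each layer consumes the previous layer's estimate. If a sensor
    # degrades (None here), its module is bypassed and the coarser
    # estimate passes straight through -- mirroring the automatic
    # reconfiguration the paper describes.
    pose = imu_predict(prev, *imu)
    if visual is not None:
        pose = visual_inertial_update(pose, visual)
    if laser is not None:
        pose = scan_match_refine(pose, laser)
    return pose
```

In this arrangement, the output rate is set by the fastest layer (IMU prediction) while later layers add accuracy at lower rates, which is one way to read the paper's "high-frequency, low-latency" claim.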
The proposed method utilizes only perception sensors without reliance on GPS. Specifically, we are interested in solving for extremely aggressive motion. Yet, it remains nonobvious how to solve the problem reliably in 6 degrees of freedom (DOF), in real time, and in a small form factor. The problem is closely tied to sensor degradation caused by the sparsity of data during dynamic maneuvers. To the best of our knowledge, the proposed method is by far the first to enable such high-rate ego-motion estimation capable of handling running and jumping (see Figure 1 as an example), while at the same time building a dense, accurate 3D map, in the field under various lighting and structural conditions, and using only sensing and computing devices that can be easily carried by a person. The key reason enabling this level of performance is our novel way of data processing. As shown in Figure 2a, a standard Kalman filter-based method typically uses IMU mechanization in a prediction step followed by update steps seeded with individual visual features or laser landmarks. On the other hand, a fa...