In this paper, we perform a thorough observability analysis for linearized inertial navigation systems (INS) aided by exteroceptive range and/or bearing sensors (such as cameras, LiDAR, and sonar) with different geometric features (points, lines, and planes). While the observability of vision-aided INS (VINS) with point features has been extensively studied in the literature, we analytically show that the general aided INS with point features preserves the same observability property: that is, 4 unobservable directions, corresponding to the global yaw and the global position of the sensor platform. We further prove that there are at least 5 (and 7) unobservable directions for the linearized aided INS with a single line (and plane) feature, and, for the first time, analytically derive the unobservable subspace for the case of multiple lines/planes. Building upon this, we examine the observability of the linearized aided INS with different combinations of points, lines, and planes, and show that, in general, the system preserves at least 4 unobservable directions, while, as expected, some unobservable directions vanish if global measurements are available. In particular, when using plane features, we propose a minimal closest point (CP) representation, and we study in depth the effects of 5 identified degenerate motions on observability. To numerically validate our analysis, we develop and evaluate both EKF-based visual-inertial SLAM and visual-inertial odometry (VIO) using heterogeneous geometric features in Monte Carlo simulations.
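For concreteness, the closest point (CP) plane representation mentioned above can be sketched as follows (a minimal illustration in our own notation; the symbols ${}^{G}\mathbf{n}$, $d$, and ${}^{G}\boldsymbol{\Pi}$ are ours, not taken verbatim from the analysis): a plane with unit normal ${}^{G}\mathbf{n}$ and signed distance $d$ to the global origin is encoded by the point on the plane closest to that origin,

$$ {}^{G}\boldsymbol{\Pi} \;=\; d\,{}^{G}\mathbf{n}, \qquad d = \big\|{}^{G}\boldsymbol{\Pi}\big\|, \qquad {}^{G}\mathbf{n} = \frac{{}^{G}\boldsymbol{\Pi}}{\big\|{}^{G}\boldsymbol{\Pi}\big\|}, $$

which gives a minimal 3-parameter state per plane. Note that this form is generally reported to become singular when the plane passes through the origin ($d \to 0$), which is one reason a careful treatment of degenerate cases is warranted.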
Related Work

Aided INS is a classical research topic with a significant body of literature [25] and has recently been re-emerging, in part due to advances in sensing and computing technologies. In this section, we briefly review the literature closest to this work, focusing on vision-aided INS.
Aided INS with Points, Lines, and Planes

As mentioned earlier, vision-aided INS (VINS) is arguably among the most popular localization methods, in particular for resource-constrained sensor platforms such as mobile devices and micro aerial vehicles (MAVs) navigating in GPS-denied environments (e.g., see [26,27,10,28]). While most current VINS algorithms focus on point features (e.g., [7,8,9,10]), line and plane features should not be blindly discarded in structured environments [29,30,31,32,33,34,35,36,24], in part because: (i) they are ubiquitous and compact in many urban or indoor environments (e.g., doors, walls, and stairs), (ii) they can be detected and tracked over relatively long time periods, and (iii) they are more robust than point features in texture-less environments.

In the case of utilizing line features, Kottas et al. [29] represented a line with a quaternion and a distance scalar and studied the observability properties of linearized VINS with this line parameterization. Yu et al. [30] proposed a minimal four-parameter representation of line features for VIO using rolling-shutter cameras, while Zheng et al. [31] used two end points t...