Abstract-This paper addresses the problem of visual-inertial navigation when processing camera observations of both point and line features detected within a Manhattan world. First, we prove that the observations of (i) a single point and (ii) a single line of known direction perpendicular to gravity (e.g., a non-vertical structural line of a building) provide sufficient information for rendering all degrees of freedom of a vision-aided inertial navigation system (VINS) observable, up to global translations. Next, we examine the observability properties of the linearized system employed by an extended Kalman filter (EKF) for processing line observations of known direction, and show that the rank of the corresponding observability matrix erroneously increases. To address this problem, we introduce an elegant modification that enforces the correct number of unobservable directions in the linearized EKF system, thus improving its consistency. Finally, we validate our findings experimentally in urban scenes and demonstrate the superior performance of the proposed VINS over alternative approaches.
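For intuition, a common way to enforce a prescribed unobservable subspace in a linearized EKF is to project each measurement Jacobian onto the complement of the desired unobservable directions; this is a sketch of the generic observability-constrained idea, not necessarily the exact modification introduced in this paper. If $\mathbf{N}_k$ spans the intended unobservable directions (here, the three global translations), the projected Jacobian

\[
\mathbf{H}_k' = \mathbf{H}_k \left( \mathbf{I} - \mathbf{N}_k \left( \mathbf{N}_k^{\top} \mathbf{N}_k \right)^{-1} \mathbf{N}_k^{\top} \right)
\]

satisfies $\mathbf{H}_k' \mathbf{N}_k = \mathbf{0}$ by construction, so the observability matrix of the linearized system retains a nullspace of the correct dimension and its rank cannot erroneously increase.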