We propose RLP-VIO, a robust and lightweight monocular visual-inertial odometry system based on multiplane priors. With planes extracted from the point cloud, a visual-inertial-plane PnP exploits the plane information for fast localization. Because depth estimation is susceptible to degenerate motion, the planes are expanded with a reprojection-consensus scheme that is robust to depth errors. For sensor fusion, our sliding-window optimization uses a novel structureless plane-distance error cost, which prevents the fill-in effect that destroys the sparsity of the bundle adjustment (BA) problem and permits a smaller sliding window while maintaining good accuracy. The total computational cost is further reduced by our modified marginalization strategy. To further improve tracking robustness, the landmark depths are constrained by the planes during degenerate motion. The whole system is parallelized with a three-stage pipeline; under controlled environments, this parallelization runs deterministically and produces consistent results. The resulting VIO system is evaluated on widely used datasets and compared with several state-of-the-art systems: it achieves competitive accuracy and remains robust even on long and challenging sequences. To demonstrate the effectiveness of the proposed system, we also show an AR application running in real time on mobile devices.
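As a rough illustration only, the C++/Eigen sketch below shows how a signed point-to-plane distance residual of the kind a structureless plane-distance cost could build on might be formed from a camera pose, a feature's inverse depth, and a plane. The plane parameterization (unit normal plus offset), the inverse-depth landmark model, and all names here are assumptions made for illustration, not the paper's actual formulation.

```cpp
// Minimal sketch (not the paper's implementation) of a point-to-plane
// distance residual. Assumed parameterizations: plane as (unit normal n,
// offset d) with n.dot(x) + d = 0, and landmarks as inverse depth in the
// host camera frame.
#include <Eigen/Dense>

struct Plane {
  Eigen::Vector3d n;  // unit normal in the world frame
  double d;           // offset: n.dot(x) + d = 0 on the plane
};

// Back-project a feature observed at normalized image coordinates `uv`
// with inverse depth `rho` in the host camera, transform it to the world
// frame with the host pose (R_wc, t_wc), and return its signed distance
// to the plane. A cost of this form constrains the pose and the inverse
// depth directly against the plane, without introducing a separate
// plane-landmark block into the optimization state.
double PlaneDistanceResidual(const Plane& plane,
                             const Eigen::Matrix3d& R_wc,
                             const Eigen::Vector3d& t_wc,
                             const Eigen::Vector2d& uv,
                             double rho) {
  const Eigen::Vector3d p_c(uv.x() / rho, uv.y() / rho, 1.0 / rho);  // camera frame
  const Eigen::Vector3d p_w = R_wc * p_c + t_wc;                     // world frame
  return plane.n.dot(p_w) + plane.d;                                 // signed distance
}
```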