In this paper, a novel multi-information fusion methodology is proposed for vehicle pose estimation. The aim is to improve pose estimation accuracy during Global Navigation Satellite System (GNSS) outages from two aspects: 1) extra observations of accurate velocities and angular rates without cumulative errors; 2) adaptive and intelligent fusion. First, a multidimensional motion perception network (MMPN) is designed to estimate velocities and angular rates while simultaneously outputting the motion state. The network takes two consecutive images as input, so the estimated velocities and angular rates are not affected by cumulative errors. Moreover, because the corresponding velocities and angular rates measured by a high-quality Inertial Measurement Unit (IMU) with low noise and drift are used as training samples, the accuracy of the estimated velocities and angular rates is ensured. Further, an Unscented Kalman Filter with a Federated Structure (UKF-F) employing different motion models is designed for pose estimation. The local filters of the UKF-F use the current probabilities of belonging to the motion patterns, output by the MMPN, as the information-sharing factors, so the fusion adapts to the actual vehicle motion state. In addition, a Gated Recurrent Unit (GRU)-based error compensation module is introduced to further reduce position errors during GNSS outages. The GRU-based module is trained offline with MMPN and IMU observations as inputs. Owing to the strengths of GRUs in time-series forecasting, the module can accurately predict the position error of the IMU/MMPN integration online. The KITTI dataset, covering different driving scenarios, is used to confirm the feasibility of the proposed methodology. The experimental results verify that the proposed methodology achieves precise vehicle pose estimation during GNSS outages.
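
To make the federated fusion step concrete, the following is a minimal sketch (not the paper's exact UKF-F implementation) of a standard federated Kalman fusion and reset step, where the information-sharing factors `betas` stand in for the motion-pattern probabilities produced by the MMPN; all function and variable names here are illustrative assumptions.

```python
import numpy as np

def federated_fuse(states, covs, betas):
    """Fuse local filter estimates in information form, then reset
    each local filter with its share of the global information.

    states: list of (n,) local state estimates
    covs:   list of (n, n) local covariance matrices
    betas:  information-sharing factors (here assumed to be the
            motion-pattern probabilities from the MMPN), summing to 1
    """
    # Global information is the sum of local informations:
    # P_g^{-1} = sum_i P_i^{-1},  x_g = P_g * sum_i P_i^{-1} x_i
    infos = [np.linalg.inv(P) for P in covs]
    P_g = np.linalg.inv(sum(infos))
    x_g = P_g @ sum(I @ x for I, x in zip(infos, states))
    # Federated reset: local filter i restarts from the global state
    # with inflated covariance P_g / beta_i (beta_i share of information)
    resets = [(x_g.copy(), P_g / b) for b in betas]
    return x_g, P_g, resets
```

With equal covariances and equal sharing factors, the fused state is simply the mean of the local estimates, while a larger `beta_i` (a more probable motion pattern) hands that local filter a tighter reset covariance, i.e., more of the global information.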