Accurate onboard camera pose estimation is a major challenge for satellite systems, and efforts to improve the pose accuracy of remote sensing cameras are ongoing. The camera pose can be recovered by aligning a captured 2D image with a 3D digital surface model of the corresponding scene. In this paper, a novel method is proposed that estimates the camera pose from captured images together with known 3D products of the real scene, in order to enhance remote sensing camera attitude accuracy. The aim is to determine the pose of the camera purely from an image, given a known 3D model: 3D products of very high spatial resolution are projected onto image space by a virtual camera system with error-contaminated initial exterior orientation parameters, and how precisely the camera pose can be determined depends on the 2D–3D registration result. The process consists of two steps: (1) feature extraction, and (2) similarity measurement and registration. The proposed method then refines the rotation matrix and the translation vector using a formulation based on the quaternion representation of rotation. We evaluate our method on challenging simulation data, and the results show that acceptable camera pose accuracy can be achieved.
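The abstract only names the projection and quaternion-based refinement steps; the sketch below is a minimal, hypothetical illustration of both, not the paper's implementation. The function names (project_points, quat_to_rotation), the pinhole camera model, and all numeric values are assumptions introduced here for illustration; it shows how 3D surface-model points could be rendered into image space by a virtual camera whose error-contaminated exterior orientation is represented by a quaternion.

```python
import numpy as np

def quat_to_rotation(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)  # normalize to stay on SO(3)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def project_points(X_world, R, t, K):
    """Project 3D scene points into image space with a pinhole model.

    X_world : (N, 3) 3D points sampled from the surface model
    R, t    : exterior orientation (rotation matrix, translation vector)
    K       : (3, 3) camera intrinsic matrix
    """
    X_cam = X_world @ R.T + t          # world frame -> camera frame
    uv = (K @ X_cam.T).T               # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective division

# Example: render a synthetic view from an error-contaminated initial
# exterior orientation (all values below are placeholders, not from the paper).
q0 = np.array([0.999, 0.010, -0.020, 0.015])   # perturbed attitude guess
R0 = quat_to_rotation(q0)
t0 = np.array([120.0, -45.0, 500.0])
K = np.array([[1000.0,    0.0, 512.0],
              [   0.0, 1000.0, 512.0],
              [   0.0,    0.0,   1.0]])
pts = np.random.rand(100, 3) * 100.0           # stand-in for DSM points
uv = project_points(pts, R0, t0, K)
```

In a registration loop, refining the quaternion rather than the nine matrix entries keeps the estimated rotation on SO(3) at every iteration, which is presumably why the method revises the rotation through a quaternion formulation.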