Real-time six-degree-of-freedom (6DOF) pose estimation of an uncooperative spacecraft is an important part of proximity operations such as space debris removal, spacecraft rendezvous and docking, and on-orbit servicing. In this paper, a novel, efficient deep-learning-based approach is proposed to estimate the 6DOF pose of an uncooperative spacecraft from monocular vision measurements. First, we introduce a new lightweight YOLO-like CNN that detects the spacecraft and predicts, in real time, the 2D locations of the projected keypoints of a previously reconstructed 3D model. Then, we design two novel models that predict bounding-box (bbox) reliability scores and the probability of keypoint existence; these two models not only significantly reduce false positives but also speed up convergence. Finally, the 6DOF pose is estimated and refined using Perspective-n-Point (PnP) and a geometric optimizer. Results demonstrate that the proposed approach achieves 73.2% average precision and 77.6% average recall for spacecraft detection on the SPEED dataset after only 200 training epochs. For the pose estimation task, the mean rotational error is 0.6812° and the mean translation error is 0.0320 m. The proposed approach achieves competitive pose estimation performance on the SPEED dataset while being extremely lightweight (∼0.89 million learnable weights in total) and efficient enough for real-time applications.
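The last stage described above, recovering the pose from 2D-3D keypoint correspondences via Perspective-n-Point, can be sketched with a minimal NumPy-only Direct Linear Transform (DLT) solver. This is an illustrative sketch, not the paper's actual solver or refinement step; the camera intrinsics, the cuboid keypoint model, and all variable names below are assumptions made for the demo.

```python
import numpy as np

def dlt_pnp(K, pts3d, pts2d):
    """Minimal Direct-Linear-Transform PnP: estimate rotation R and
    translation t from n >= 6 non-coplanar 2D-3D correspondences."""
    # Work in normalized camera coordinates so we solve for [R | t] directly.
    uv1 = np.hstack([pts2d, np.ones((len(pts2d), 1))])
    xn = (np.linalg.inv(K) @ uv1.T).T
    rows = []
    for (X, Y, Z), (x, y, _) in zip(pts3d, xn):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    # The pose vector spans the right null space of the stacked constraints.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)                 # [R | t], up to scale and sign
    U, S, VT = np.linalg.svd(P[:, :3])       # project left 3x3 onto a rotation
    R, t = U @ VT, P[:, 3] / S.mean()
    if np.linalg.det(R) < 0:                 # resolve the overall sign ambiguity
        R, t = -R, -t
    return R, t

# Synthetic check: 8 corners of a hypothetical 1 m cuboid keypoint model.
pts3d = np.array([[x, y, z] for x in (-0.5, 0.5)
                            for y in (-0.5, 0.5)
                            for z in (-0.5, 0.5)])
K = np.array([[600.0, 0.0, 320.0],           # illustrative pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
cz, sz = np.cos(0.4), np.sin(0.4)
cx, sx = np.cos(0.2), np.sin(0.2)
R_true = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @ \
         np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
t_true = np.array([0.1, -0.2, 6.0])
cam = pts3d @ R_true.T + t_true              # keypoints in the camera frame
pix = (cam @ K.T)[:, :2] / cam[:, 2:]        # noise-free keypoint detections
R_hat, t_hat = dlt_pnp(K, pts3d, pix)
```

With noise-free correspondences the DLT recovers the pose essentially exactly; with real keypoint predictions, the DLT output would serve only as an initial guess that a geometric (reprojection-error) optimizer then refines, which matches the estimate-then-refine structure described in the abstract.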