Although traditional dictionary learning (DL) methods have achieved great success in pattern recognition and machine learning, they are extremely time-consuming, especially in the training stage. Projective dictionary pair learning (DPL) learns a synthesis dictionary and an analysis dictionary jointly to obtain a fast and accurate classifier. However, because the dictionary pair is initialized with random matrices that use no information from the data samples, DPL requires many iterations to converge. In this paper, we propose a novel compact DPL and refining method based on the observation that the eigenvalue curve of the sample data covariance matrix usually decreases very fast, which means that both the synthesis dictionary and the analysis dictionary can be compacted. In the first stage, for each class of data samples, we utilize principal component analysis (PCA) to retain globally important information and to compact the row space of the synthesis dictionary and the column space of the analysis dictionary. We then refine the learned dictionary pair to achieve a more accurate classifier during compact dictionary pair refining, which combines the orthogonality of PCA with the redundancy of DL. We solve this refining problem entirely in closed form, which naturally reduces the computational complexity significantly. Experimental results on the Extended YaleB database and the AR database show that the proposed method achieves competitive accuracy and low computational complexity compared with other state-of-the-art methods.
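The compaction step above rests on the fast decay of the eigenvalue curve of each class's sample covariance matrix: a few leading eigenvectors capture most of the energy, so the dictionary pair can live in a much smaller subspace. The following is a minimal NumPy sketch of that idea, not the paper's exact algorithm; the function name `compact_basis`, the energy threshold, and the choice of a 99% cutoff are illustrative assumptions.

```python
import numpy as np

def compact_basis(X, energy=0.99):
    """Illustrative sketch (hypothetical helper, not the paper's code):
    use PCA to keep the few leading eigenvectors of one class's sample
    covariance, exploiting the fast decay of its eigenvalue curve.
    X: (d, n) matrix holding n samples of dimension d for one class."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center the class samples
    C = Xc @ Xc.T / X.shape[1]               # sample covariance, (d, d)
    w, V = np.linalg.eigh(C)                 # eigenvalues in ascending order
    w, V = w[::-1], V[:, ::-1]               # reorder to descending
    # smallest k whose leading eigenvalues cover the requested energy
    k = int(np.searchsorted(np.cumsum(w) / w.sum(), energy)) + 1
    return V[:, :k]                          # (d, k) orthonormal basis, k << d
```

With `P = compact_basis(X)`, a synthesis dictionary `D` of shape `(d, m)` could be compacted to the reduced row space as `P.T @ D`, and an analysis dictionary applied in that same k-dimensional space; the orthonormal columns of `P` are what the abstract refers to as the orthogonality of PCA.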