In this paper, we propose a method for matching biometric data across disparate domains. Specifically, we focus on the problem of comparing a low-resolution (LR) image with a high-resolution (HR) one. Existing coupled mapping methods either do not fully exploit the HR information or do not use samples from both domains simultaneously during training. To address these limitations, we propose a method that learns coupled distance metrics in two steps. In addition, we propose to jointly learn two semi-coupled bases that yield optimal representations. In particular, the HR images are used to learn a basis and a distance metric that increase class separation. The LR images are then used to learn a basis and a distance metric that map the LR data to their class-discriminated HR pairs. Finally, the two distance metrics are jointly refined to simultaneously enhance the class separation of both the class-discriminated HR images and the projected LR images. We illustrate that different distance metric learning approaches can be employed in conjunction with our framework. Experimental results on the Multi-PIE and SCface databases, along with the relevant hypothesis tests, provide evidence of the effectiveness of the proposed approach.

Figure 4. ROC curves for the Multi-PIE database for Experiment 2 (color figure, best viewed in electronic format).
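The two-step structure described above can be illustrated with a minimal sketch. All names, data, and sub-methods here are illustrative assumptions, not the authors' algorithm: a PCA basis stands in for the class-discriminative HR projection, an ordinary least-squares solve stands in for the semi-coupled LR mapping, and a fixed Euclidean distance stands in for the learned (and jointly refined) metrics.

```python
import numpy as np

# Toy sketch of a two-step semi-coupled mapping (illustrative assumptions only).
rng = np.random.default_rng(0)

# Synthetic "HR" features and degraded "LR" counterparts for n paired samples.
n, d_hr, d_lr, k = 100, 20, 8, 5
X_hr = rng.normal(size=(n, d_hr))
X_lr = X_hr[:, :d_lr] + 0.1 * rng.normal(size=(n, d_lr))  # LR pairs of the HR samples

# Step 1: learn an HR basis (here, plain PCA as a stand-in for a
# class-discriminative projection) and project the HR data.
X_hr_c = X_hr - X_hr.mean(axis=0)
U, _, _ = np.linalg.svd(X_hr_c.T @ X_hr_c)
P_hr = U[:, :k]                      # HR basis (d_hr x k)
Z_hr = X_hr_c @ P_hr                 # projected HR representations

# Step 2: learn an LR basis that maps each LR sample onto its HR pair's
# projection (semi-coupled), via a least-squares solve for P_lr.
X_lr_c = X_lr - X_lr.mean(axis=0)
P_lr, *_ = np.linalg.lstsq(X_lr_c, Z_hr, rcond=None)
Z_lr = X_lr_c @ P_lr                 # LR samples mapped into the HR space

# Matching: nearest HR projection under Euclidean distance (a fixed metric
# standing in for the learned, jointly refined distance metrics).
d = np.linalg.norm(Z_lr[:, None, :] - Z_hr[None, :, :], axis=-1)
match = d.argmin(axis=1)
acc = (match == np.arange(n)).mean()
print(f"toy rank-1 self-match rate: {acc:.2f}")
```

In the paper's actual framework, the two projections and their distance metrics would additionally be refined jointly; this sketch only conveys the coupled-space idea of matching LR probes against HR galleries in a shared projected space.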