Computer-aided diagnosis (CAD) systems play an essential role in the early detection and diagnosis of disease in medical applications. To obtain highly discriminative representations of medical images, this paper proposes a self-adaptive discriminative autoencoder (SADAE). The proposed SADAE system is implemented under a deep metric learning framework consisting of K local autoencoders, which learn the K subspaces that capture the diverse distribution of the underlying data, and a global autoencoder, which restricts the spatial scale of the learned image representations. This community of autoencoders is aided by a self-adaptive metric learning method that extracts discriminative features for recognizing the different categories in the given images. The quality of the features extracted by SADAE is compared with that of features extracted by other state-of-the-art deep learning and metric learning methods on five popular medical image data sets. The experimental results demonstrate that SADAE achieves substantially better medical image recognition performance than these alternatives.
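
As a rough illustration of the kind of architecture described above, the following PyTorch sketch combines K local autoencoders with a global autoencoder and trains them with reconstruction losses plus a generic contrastive metric term on the global embedding. It is not the authors' implementation: the layer sizes, the margin, the names (AE, SADAESketch, sadae_loss), and the use of a simple squared-distance contrastive loss in place of the paper's self-adaptive metric are all illustrative assumptions.

# Illustrative sketch only, not the authors' implementation: K local
# autoencoders plus one global autoencoder, trained with reconstruction
# losses and a margin-based contrastive term on the global embedding.
# All layer sizes, the margin, and the toy data are hypothetical choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AE(nn.Module):
    """A plain fully connected autoencoder."""

    def __init__(self, in_dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)


class SADAESketch(nn.Module):
    """K local autoencoders (one per subspace) and one global autoencoder."""

    def __init__(self, in_dim: int = 784, code_dim: int = 64, k: int = 3):
        super().__init__()
        self.local_aes = nn.ModuleList([AE(in_dim, code_dim) for _ in range(k)])
        self.global_ae = AE(in_dim, code_dim)

    def forward(self, x):
        local_out = [ae(x) for ae in self.local_aes]   # per-subspace (code, reconstruction)
        global_code, global_rec = self.global_ae(x)    # shared global embedding
        return local_out, global_code, global_rec


def sadae_loss(model, x, labels, margin: float = 1.0):
    """Reconstruction losses plus a contrastive term that pulls same-label
    embeddings together and pushes different-label embeddings apart."""
    local_out, global_code, global_rec = model(x)
    rec = F.mse_loss(global_rec, x)
    rec = rec + sum(F.mse_loss(r, x) for _, r in local_out) / len(local_out)

    # Pairwise squared distances between global embeddings in the batch.
    diff = global_code.unsqueeze(0) - global_code.unsqueeze(1)
    sq_dists = (diff ** 2).sum(-1)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)
    pull = sq_dists[same & ~eye].mean()                # same class: small distance
    push = F.relu(margin - sq_dists[~same]).mean()     # different class: at least margin
    return rec + pull + push


if __name__ == "__main__":
    model = SADAESketch()
    x = torch.randn(8, 784)                            # toy batch of flattened images
    y = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])         # toy binary labels
    loss = sadae_loss(model, x, y)
    loss.backward()
    print(float(loss))

In this sketch the local and global reconstruction objectives stand in for the paper's subspace learning and spatial-scale constraint, while the contrastive term is only a stand-in for the self-adaptive metric learning component described in the abstract.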