As populations age worldwide, applying artificial intelligence to the diagnosis of Alzheimer’s disease (AD) has become critical for improving diagnostic accuracy. In the early diagnosis of AD, fusing the complementary information contained in multimodal data, e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and cerebrospinal fluid (CSF) measures, has achieved considerable success. However, detecting Alzheimer’s disease from multimodal data presents two difficulties: (1) multimodal data contain noise; (2) an effective mathematical model of the relationship between the modalities must be established. To this end, we propose a method named LDF, which combines low-rank representation with discriminant correlation analysis (DCA) to fuse multimodal datasets. Specifically, low-rank representation is used to extract the latent features of each submodality, removing the noise it contains. Discriminant correlation analysis is then used to fuse the submodalities, so that their complementary information is fully exploited. Experimental results demonstrate the effectiveness of the method.
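The two-stage pipeline described above (low-rank denoising of each modality, followed by DCA-based fusion) can be sketched as follows. This is a minimal illustration, not the authors’ exact LDF implementation: low-rank representation is approximated here by a truncated SVD, the DCA step follows the standard formulation (between-class-scatter whitening, then an SVD of the cross-scatter so paired projected features have unit cross-covariance), and all function names and dimensions are illustrative assumptions.

```python
import numpy as np

def low_rank_denoise(X, rank):
    """Truncated SVD as a simple stand-in for low-rank representation:
    the retained top-`rank` components are treated as the latent (signal)
    part of the modality; the discarded residual is treated as noise."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

def dca_fuse(X, Y, labels, dim):
    """Simplified discriminant correlation analysis (DCA).
    X, Y: (features x samples) views of the same n samples.
    Steps: (1) whiten each view w.r.t. its between-class scatter,
    (2) SVD the cross-scatter so paired projected features have unit
    cross-covariance, (3) fuse by stacking the two projections."""
    def whiten_between_class(Z):
        mu = Z.mean(axis=1, keepdims=True)
        # Columns of Phi are sqrt(n_c)-scaled class-mean deviations,
        # so S_b = Phi @ Phi.T is the between-class scatter matrix.
        Phi = np.column_stack([
            np.sqrt(np.sum(labels == c))
            * (Z[:, labels == c].mean(axis=1) - mu.ravel())
            for c in np.unique(labels)
        ])
        lam, Q = np.linalg.eigh(Phi.T @ Phi)   # small (c x c) problem
        keep = lam > 1e-8 * lam.max()
        W = Phi @ Q[:, keep] / lam[keep]       # W.T @ S_b @ W = I
        return W.T @ (Z - mu)

    Xp, Yp = whiten_between_class(X), whiten_between_class(Y)
    U, s, Vt = np.linalg.svd(Xp @ Yp.T)        # cross-scatter SVD
    r = min(dim, int(np.sum(s > 1e-8 * s[0])))
    Wx = U[:, :r] / np.sqrt(s[:r])
    Wy = Vt[:r].T / np.sqrt(s[:r])
    Xs, Ys = Wx.T @ Xp, Wy.T @ Yp              # Xs @ Ys.T = I_r
    return np.vstack([Xs, Ys]), Xs, Ys
```

In practice, `X` and `Y` would hold, for example, MRI- and PET-derived feature vectors for the same subjects, and the stacked output would be fed to a downstream classifier; the paper’s actual low-rank step solves a nuclear-norm-regularized problem rather than a fixed-rank truncation.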