Eye-tracking technology has become a powerful tool for biomedical applications because it is simple to operate and places minimal demands on patients' language skills. This study uses machine-learning models and deep-learning networks to identify key eye-movement features of Alzheimer's disease (AD) under specific visual tasks, thereby facilitating computer-aided diagnosis of AD. First, a three-dimensional (3D) visuospatial memory task is designed to present visual stimuli to participants while their eye movements are recorded, and the recordings are used to build an eye-tracking dataset. We then propose a novel deep-learning model that distinguishes patients with Alzheimer's disease (PwAD) from healthy controls (HCs) based on the collected eye-movement data. The proposed model uses a nested autoencoder network to extract eye-movement features from the generated fixation heatmaps and a weight-adaptive network layer for feature fusion, preserving as much useful information as possible for the final binary classification. To fully verify the performance of the proposed model, we also design two types of baseline models, one based on traditional machine learning and one on typical deep-learning networks, for comparison, and we conduct ablation experiments to verify the effectiveness of each module of the proposed network. Finally, all models are evaluated by four-fold cross-validation on the built eye-tracking dataset. The proposed model achieves 85% average accuracy in AD recognition, outperforming the machine-learning methods and the other typical deep-learning networks.
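As a rough illustration of the fusion step, a weight-adaptive layer can be read as assigning one learnable score per feature branch and combining the branches by a softmax-weighted sum before the binary classifier. The sketch below is an assumption about this general mechanism, not the paper's actual implementation; all names, scores, and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over a 1-D score vector
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fuse(features, scores):
    """Weight-adaptive fusion (illustrative): softmax(scores) yields one
    weight per branch; branches are combined by a weighted sum."""
    w = softmax(scores)                       # (n_branches,)
    return np.tensordot(w, features, axes=1)  # weighted sum -> (dim,)

# Toy example: 3 hypothetical feature branches, each a 4-dim vector
feats = rng.normal(size=(3, 4))
scores = np.array([0.5, 1.5, -1.0])  # learnable parameters in a real network
fused = fuse(feats, scores)
```

In a trained network the scores would be updated by backpropagation, letting the model down-weight less informative heatmap features before classification.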