Domain adaptation (DA), a particular case of transfer learning, is an effective approach for learning a discriminative model in scenarios where the data from the training (source) and testing (target) domains share common class labels but follow different distributions. The differences between domains, called domain shifts, are caused by variations in acquisition devices and environmental conditions, such as changes in illumination, pose, and acquisition-device noise; these variations are tied to a specific domain and are referred to as domain-specific noise in this paper. Research on the stacked denoising autoencoder (SDA) has demonstrated that noise-robust features can be learned by training a model to remove man-made (simulated) noise. However, little research has been conducted on learning domain-invariant features by training an SDA to reduce real-world domain-specific noise. In this paper, we propose a novel variant of the SDA for DA, called the stacked local constraint auto-encoder (SLC-AE), which aims to learn domain-invariant features by iteratively optimizing the SDA and a low-dimensional manifold. The core idea behind the SLC-AE is that both the source and target samples are corrupted by domain-specific noise, and each corrupted sample can be denoised by computing the weighted sum of its neighboring samples on the intrinsic manifold. Because neighboring samples on the intrinsic manifold are semantically similar, their weighted sum preserves the generic information and suppresses the domain-specific noise. To properly evaluate the performance of the SLC-AE, we conducted extensive experiments on seven benchmark data sets: MNIST, USPS, COIL20, SYN SIGNS, GTSRB, MSRC and VOC 2007. Compared with twelve state-of-the-art methods, the proposed SLC-AE model significantly improved upon the performance of the SDA and achieved the best average performance across the seven data sets.
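To make the core denoising idea concrete, the following is a minimal sketch, not the authors' implementation: each sample is reconstructed as a similarity-weighted average of its k nearest neighbors in a low-dimensional manifold embedding. The function name, the Gaussian weighting, and all parameters (`k`, `bandwidth`) are illustrative assumptions.

```python
# Minimal sketch (an assumption, not the paper's code) of local-constraint
# denoising: reconstruct each sample as a similarity-weighted sum of its
# k nearest neighbors in a low-dimensional manifold embedding. Semantically
# similar neighbors share generic content, so their weighted average is
# assumed to suppress domain-specific noise.
import numpy as np

def local_constraint_denoise(X, Z, k=5, bandwidth=1.0):
    """X: (n, d) raw features; Z: (n, m) manifold embedding of the same
    samples. Returns a denoised copy of X."""
    n = X.shape[0]
    X_denoised = np.empty_like(X)
    # Pairwise squared Euclidean distances in the embedding space.
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    for i in range(n):
        # Indices of the k nearest neighbors of sample i, excluding itself.
        neighbors = np.argsort(d2[i])[1:k + 1]
        # Gaussian similarity weights over the neighbors, normalized to sum to 1.
        w = np.exp(-d2[i, neighbors] / (2.0 * bandwidth ** 2))
        w /= w.sum()
        # Weighted sum of neighbor features approximates the clean sample.
        X_denoised[i] = w @ X[neighbors]
    return X_denoised

# Toy usage: 100 samples with 20-dim features and a 2-dim embedding.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
Z = rng.normal(size=(100, 2))
X_clean = local_constraint_denoise(X, Z, k=5)
```

In the full method described above, this denoising step would alternate with retraining the SDA, so that the manifold embedding and the noise-robust features refine each other iteratively.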