Deep learning based retinopathy classification with optical coherence tomography (OCT) images has recently attracted great attention. However, existing deep learning methods generalize poorly when the training and testing datasets differ, owing to domain shift between datasets caused by differences in collection devices, subjects, imaging parameters, etc. To address this practical and challenging issue, we propose a novel deep domain adaptation (DDA) method that trains a model on a labeled dataset and adapts it to an unlabeled dataset collected under different conditions. It consists of two modules for domain alignment, namely adversarial learning and entropy minimization. We conduct extensive experiments on three public datasets to evaluate the performance of the proposed method. The results indicate that there are large domain shifts between datasets, which lead to poor performance for conventional deep learning methods. The proposed DDA method significantly outperforms existing methods for retinopathy classification with OCT images, achieving classification accuracies of 0.915, 0.959 and 0.990 under three cross-domain (cross-dataset) scenarios. Moreover, it attains performance comparable to that of human experts on a dataset from which no labeled data were used to train the proposed DDA method. We also visualize the learned features using the t-distributed stochastic neighbor embedding (t-SNE) technique. The results demonstrate that the proposed method learns discriminative features for retinopathy classification.
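The two domain-alignment modules summarized above (adversarial learning and entropy minimization) can be illustrated with a short PyTorch sketch. Everything below, including the network shapes, the class count, the equal loss weighting, and the use of a gradient-reversal layer for the adversarial term, is a hypothetical simplification for illustration and not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the
    backward pass, a common way to realize adversarial domain alignment."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None


def entropy_loss(logits):
    """Shannon entropy of the softmax predictions, averaged over the batch."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()


# Hypothetical components: a feature extractor, a retinopathy classifier head,
# and a domain discriminator separating source features from target features.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 256), nn.ReLU())
classifier = nn.Linear(256, 4)            # e.g., 4 retinopathy classes
domain_discriminator = nn.Linear(256, 2)  # source vs. target domain


def dda_losses(x_src, y_src, x_tgt, alpha=1.0):
    """Losses for one training step of a DDA-style model (illustrative only)."""
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # 1) Supervised classification loss on the labeled source domain.
    cls_loss = F.cross_entropy(classifier(f_src), y_src)

    # 2) Adversarial domain-alignment loss via the gradient-reversal layer.
    feats = torch.cat([f_src, f_tgt], dim=0)
    domain_labels = torch.cat(
        [torch.zeros(len(f_src)), torch.ones(len(f_tgt))]
    ).long()
    domain_logits = domain_discriminator(GradientReversal.apply(feats, alpha))
    adv_loss = F.cross_entropy(domain_logits, domain_labels)

    # 3) Entropy minimization on predictions for the unlabeled target domain.
    ent_loss = entropy_loss(classifier(f_tgt))

    return cls_loss + adv_loss + ent_loss
```

In this sketch the source-domain images carry labels and drive the standard cross-entropy term, while the unlabeled target-domain images contribute only through the adversarial and entropy terms, which is the general structure implied by the abstract.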