Remote sensing image (RSI) scene classification is a fundamental technique for ground object detection, land-use management and geographic analysis. In recent years, convolutional neural networks (CNNs) have achieved significant success and are widely applied to RSI scene classification. However, deliberately crafted images known as adversarial examples can fool CNNs with high confidence while remaining nearly indistinguishable to the human eye. Given the increasing security and robustness requirements of RSI scene classification, adversarial examples pose a serious threat to classification results produced by CNN-based systems, a threat that has not been fully recognized in previous research. In this study, to explore the properties of adversarial examples in RSI scene classification, we construct different attack scenarios by applying two major attack algorithms, the fast gradient sign method (FGSM) and the basic iterative method (BIM), to CNNs (InceptionV1, ResNet and a simple CNN) trained on different RSI benchmark datasets. Our results show that CNNs for RSI scene classification are also vulnerable to adversarial examples, with fooling rates exceeding 80% in some cases. The effectiveness of these adversarial examples depends on both the CNN architecture and the RSI dataset. InceptionV1 has a fooling rate of less than 5%, lower than the other architectures, and adversarial examples are easier to generate on the UCM dataset than on the other datasets. Importantly, we also find that the classes of adversarial examples exhibit an attack selectivity property: misclassifications of adversarial RSIs are related to the similarity of the original classes in the CNN feature space. Attack selectivity reveals the likely target classes of adversarial examples and provides insights for the design of defensive algorithms in future research.
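
For readers unfamiliar with the two attacks named above, the following is a minimal PyTorch sketch of FGSM and BIM, not the exact implementation or hyperparameters used in this study; `model`, `epsilon`, `alpha` and `steps` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon):
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0, 1).detach()  # keep pixel values in a valid range

def bim_attack(model, image, label, epsilon, alpha, steps):
    """BIM: apply FGSM iteratively with step size alpha, clipping the total
    perturbation to the epsilon-ball around the original image after each step."""
    orig = image.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv = fgsm_attack(model, adv, label, alpha)
        adv = torch.min(torch.max(adv, orig - epsilon), orig + epsilon)
    return adv
```

The adversarial image is then fed back to the classifier; a successful attack is one whose predicted class differs from that of the original image, which is how the fooling rates reported above are computed.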