Remote-sensing images play a crucial role in a wide range of applications and have received significant attention in recent years, and great efforts have been devoted to developing methods for their intelligent interpretation. Machine learning-based interpretation methods generally require a large number of labeled samples, yet annotated datasets in the remote-sensing field remain scarce, because manual annotation of remote-sensing images is labor-intensive, requires expert knowledge, and often yields annotations of limited accuracy. The goal of this paper is to propose a novel tile-level annotation method for remote-sensing images that produces well-labeled datasets with accurate semantic concepts. Firstly, we use a set of images with defined semantic concepts as the training set and divide each image into several nonoverlapping regions. Secondly, the color, texture, and spatial features of each region are extracted, and discriminative features are obtained through a weight-optimized feature fusion method. Then, the fused features are quantized into visual words by applying a density-based clustering-center selection method together with an isolated-feature-point elimination method, so that each remote-sensing image can be represented by a sequence of visual words. Finally, an LDA model is used to compute the probability of each semantic category for every region. Experiments on remote-sensing images demonstrate that the proposed method achieves good performance on tile-level annotation. Our approach thus makes it possible to obtain annotated datasets with accurate semantic concepts for the intelligent interpretation of remote-sensing images.
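
To make the visual-word and LDA stages of the pipeline concrete, the following is a minimal sketch, not the paper's implementation: scikit-learn's MiniBatchKMeans stands in for the density-based clustering-center selection, the isolated-feature-point elimination step is omitted, and the feature matrices are random placeholders for the fused color/texture/spatial descriptors.

```python
# Illustrative sketch of a bag-of-visual-words + LDA annotation pipeline.
# Assumptions: random placeholder descriptors, k-means instead of the paper's
# density-based center selection, and no isolated-point elimination.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

n_regions, n_desc_per_region, feat_dim = 200, 50, 64
n_visual_words, n_topics = 128, 8

# Placeholder fused descriptors: one set of local features per image region.
region_descriptors = [rng.random((n_desc_per_region, feat_dim))
                      for _ in range(n_regions)]

# 1) Quantize local descriptors into a visual vocabulary.
all_desc = np.vstack(region_descriptors)
vocab = MiniBatchKMeans(n_clusters=n_visual_words, random_state=0).fit(all_desc)

# 2) Represent every region as a histogram ("bag") of visual words.
def bow_histogram(desc):
    words = vocab.predict(desc)
    return np.bincount(words, minlength=n_visual_words)

bow = np.array([bow_histogram(d) for d in region_descriptors])

# 3) Fit an LDA topic model on the bag-of-visual-words counts; the per-region
#    topic distribution plays the role of semantic-category probabilities.
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
region_topic_probs = lda.fit_transform(bow)  # shape: (n_regions, n_topics)

print(region_topic_probs[0])  # probability of each semantic topic for region 0
```

In practice, the placeholder descriptors would be replaced by the fused color, texture, and spatial features described above, and the cluster centers would come from the density-based selection method rather than plain k-means.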