Remote sensing image classification is of great importance for urban development and planning, and the need for higher classification accuracy has driven improvements in classification technology. In this research, Landsat 8 images are used as experimental data, and Wuhan, Chengde, and Tongchuan are selected as the study areas. The optimal neighborhood window size for image patches and the band combination scheme are selected through two sets of comparison experiments. An object-oriented convolutional neural network (OCNN) is then used as the classifier. The experimental results show that the classification accuracy of the OCNN classifier is 6% higher than that of an SVM classifier and 5% higher than that of a standard convolutional neural network classifier. The classification map produced by the OCNN is more continuous than those obtained with the other two classifiers, with fewer fragmented patches for most categories. The OCNN effectively mitigates the salt-and-pepper noise problem and improves classification accuracy, which verifies the effectiveness of the proposed object-oriented model.
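The object-oriented idea described above can be illustrated with a minimal sketch: segment the image into objects, crop a fixed-size neighborhood window around each object's centroid, classify the patch, and assign the predicted label to every pixel of the object. This is a schematic illustration, not the paper's implementation; `classify_patch` is a hypothetical stand-in for the trained OCNN, and the window size, segmentation, and threshold rule are illustrative assumptions.

```python
import numpy as np

def classify_patch(patch):
    # Hypothetical stand-in for a trained CNN: a toy rule on
    # mean intensity (the paper's OCNN is a learned network).
    return int(patch.mean() > 0.5)

def object_oriented_classify(image, segments, window=5):
    """Assign one label per segmented object: crop a neighborhood
    window around each object's centroid, classify the patch, and
    propagate the label to every pixel of that object."""
    h, w = image.shape
    half = window // 2
    out = np.zeros_like(segments)
    for obj_id in np.unique(segments):
        ys, xs = np.nonzero(segments == obj_id)
        cy, cx = int(ys.mean()), int(xs.mean())
        # Clip the window to the image bounds.
        y0, y1 = max(0, cy - half), min(h, cy + half + 1)
        x0, x1 = max(0, cx - half), min(w, cx + half + 1)
        label = classify_patch(image[y0:y1, x0:x1])
        out[segments == obj_id] = label
    return out

# Toy example: left half dark (object 0), right half bright (object 1).
img = np.zeros((10, 10)); img[:, 5:] = 1.0
seg = np.zeros((10, 10), dtype=int); seg[:, 5:] = 1
labels = object_oriented_classify(img, seg)
print(labels[0, 0], labels[0, 9])  # → 0 1
```

Because every pixel of an object receives the same label, the output map is piecewise constant over objects, which is why object-oriented classification suppresses the isolated mislabeled pixels that cause salt-and-pepper noise in per-pixel classifiers.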
Extracted building information can be widely applied in urban planning, land resource management, and other related fields. This paper proposes a novel building extraction method that aims to improve extraction accuracy by combining a bi-directional feature pyramid with a location-channel attention feature serial fusion module (L-CAFSFM). Using the ResNeXt101 backbone, more precise and abundant building features are extracted. The L-CAFSFM combines and computes adjacent two-level feature maps, and the iteration from high-level to low-level and from low-level to high-level enhances the model's feature extraction ability at different scales and levels. The DenseCRF algorithm is then used to refine the correlation between pixels. The performance of the method is evaluated on the Wuhan University (WHU) building dataset, and the experimental results show that its precision, F-score, recall, and IoU are 94.94%, 94.32%, 93.70%, and 89.25%, respectively. Compared with the baseline network, the method extracts buildings from high-resolution images more accurately.
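The bidirectional pyramid fusion described above can be sketched schematically: run a top-down pass that propagates coarse semantics into finer feature maps, then a bottom-up pass that propagates fine detail back into coarser maps, fusing adjacent levels at each step. This is a minimal numpy sketch under stated assumptions: the `channel_attention` function is a toy placeholder, not the paper's L-CAFSFM, and nearest-neighbor resampling stands in for the learned up/down-sampling layers.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling along the spatial axes of a (C, H, W) map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # Stride-2 subsampling along the spatial axes.
    return x[:, ::2, ::2]

def channel_attention(x):
    # Toy channel attention: reweight each channel by the sigmoid of
    # its global average (placeholder for the paper's L-CAFSFM).
    w = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))
    return x * w[:, None, None]

def bidirectional_fuse(features):
    """features: list of (C, H, W) maps ordered from low-level (fine)
    to high-level (coarse). Fuses adjacent levels in a top-down pass,
    then a bottom-up pass."""
    feats = list(features)
    # Top-down: inject coarse semantics into finer maps.
    for i in range(len(feats) - 2, -1, -1):
        feats[i] = channel_attention(feats[i] + upsample2x(feats[i + 1]))
    # Bottom-up: inject fine detail back into coarser maps.
    for i in range(1, len(feats)):
        feats[i] = channel_attention(feats[i] + downsample2x(feats[i - 1]))
    return feats

# Three pyramid levels at halving resolution, 4 channels each.
levels = [np.ones((4, 16, 16)), np.ones((4, 8, 8)), np.ones((4, 4, 4))]
fused = bidirectional_fuse(levels)
print([f.shape for f in fused])  # → [(4, 16, 16), (4, 8, 8), (4, 4, 4)]
```

Each level keeps its original resolution after fusion, but every map now carries information from both finer and coarser scales, which is the point of iterating the fusion in both directions.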