2022
DOI: 10.3390/rs14112688
A Land Use Classification Model Based on Conditional Random Fields and Attention Mechanism Convolutional Networks

Abstract: Land use reflects the expression of human activities in space, and land use classification is a way to obtain accurate land use information. Obtaining high-precision land use classification from remote sensing images remains a significant challenge. Traditional machine learning methods and image semantic segmentation models are unable to make full use of the spatial and contextual information of images, so the resulting classification does not meet high-precision requirements. In order t…

Cited by 7 publications (4 citation statements). References 40 publications.
“…The remote sensing data sources used include MODIS [13,14], the Landsat series [15][16][17][18], and DMSP/OLS nighttime light data [19][20][21][22], but the spatial resolution of these remote sensing products is relatively low. High-resolution remote sensing data sources include QuickBird, IKONOS, and Gaofen2 [23][24][25]; these products are very expensive, and Sentinel-2, which is free and has high resolution, has therefore become the data source of choice for many studies [26]. By processing remote-sensing images and extracting remote-sensing information and features, the boundaries of built-up urban areas can be better extracted [10].…”
Section: Introduction (mentioning, confidence: 99%)
“…In urban planning, it has been utilized for road detection,5 building change detection,6 illegal building detection,7 and 3D scene reconstruction.8,9 In the field of land cover mapping,10 it has been used to detect forest land, agricultural land, bare land, irrigated land, and dry land. As the classification of ground objects becomes increasingly refined, the demand for more advanced and sophisticated classification and segmentation algorithms has significantly increased.…”
Section: Introduction (mentioning, confidence: 99%)
“…Due to its advanced classification capabilities, semantic segmentation has found wide application in many fields, such as landslide monitoring,1,2 plant disease monitoring,3 and water quality monitoring,4 in the field of disaster monitoring. In urban planning, it has been utilized for road detection,5 building change detection,6 illegal building detection,7 and 3D scene reconstruction.8,9 In the field of land cover mapping,10 it has been used to detect forest land, agricultural land, bare land, irrigated land, and dry land.…”
Section: Introduction (mentioning, confidence: 99%)
“…However, the heterogeneity of land-use features in remote-sensing images and the confusion of image elements usually lead to a decrease in urban land-use classification accuracy. For example, the spectral characteristics of commercial and residential areas in remote-sensing images are very similar, and it is difficult to further improve the accuracy of land-use classification when relying only on remote-sensing images [14,15].…”
Section: Introduction (mentioning, confidence: 99%)