2007
DOI: 10.14358/pers.73.1.37
Land-cover Classification Using Radarsat and Landsat Imagery for St. Louis, Missouri

Abstract: This paper presents the potential of integrating radar data features with optical data to improve automatic land-cover mapping. For our study area of St. Louis, Missouri, Landsat ETM+ and Radarsat images are orthorectified and co-registered to each other. A maximum likelihood classifier is utilized to determine the different land-cover categories. Ground reference data from sites throughout the study area are collected for training and validation. The variations in classification accuracy due to a number of radar …

Cited by 38 publications (17 citation statements)
References 14 publications
“…Methods of image fusion can be grouped into three categories depending on the level at which the integration is performed: (i) pixel-level fusion (data fusion); (ii) feature fusion; and (iii) decision fusion. The first category refers to the combination of the original image pixels, while the second is based on combining features extracted from the individual datasets [46,52–54]. In contrast, decision fusion requires preliminary analysis of the different datasets, e.g., the separate classifications of optical and SAR data, after which the outputs are combined to generate a final result, e.g., [43,55,56].…”
Section: Land Cover
confidence: 99%
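The three fusion levels quoted above can be sketched in a few lines of NumPy. This is an illustrative assumption, not code from the cited paper: the toy arrays, thresholds, and the soft-vote rule for decision fusion are all placeholders.

```python
import numpy as np

# Toy 2x2 scenes standing in for co-registered optical and SAR bands.
optical = np.array([[0.2, 0.4], [0.6, 0.8]])
sar     = np.array([[0.1, 0.3], [0.5, 0.7]])

# (i) Pixel-level fusion: stack the original pixel values into one image cube.
pixel_fused = np.stack([optical, sar], axis=-1)            # shape (2, 2, 2)

# (ii) Feature-level fusion: extract a feature from each dataset first
# (here a simple contrast, value minus scene mean), then stack the features.
feature_fused = np.stack([optical - optical.mean(),
                          sar - sar.mean()], axis=-1)

# (iii) Decision fusion: analyze each dataset separately, then combine the
# outputs; here a soft vote over two per-pixel "class probabilities".
decision_fused = ((optical + sar) / 2.0 > 0.5).astype(int)
```

The key distinction the quotation draws is where the combination happens: before any analysis (i), after feature extraction (ii), or after each dataset has produced its own classification output (iii).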
“…We first resampled 30 m TM data to 1, 5, 10, and 15 m resolutions using nearest neighbor (NN) resampling to minimize loss of original pixel values at finer resolutions (Gardner et al., 2008; Khan et al., 1995; Raptis et al., 2003). We then used layer-stacking to combine LiDAR surface models and TM data into three types of composite images at the five key resolutions: (1) CHM + nDSM + IS + TM, (2) CHM + nDSM (LiDAR structural) + TM, and (3) IS (LiDAR intensity) + TM (Huang et al., 2007). In addition, we produced a LiDAR-only composite image by layer-stacking the CHM, nDSM, and IS models.…”
Section: Data Fusion
confidence: 99%
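The NN resampling and layer-stacking steps described in this excerpt can be sketched with NumPy. The band values, the 15 m target, and the placeholder LiDAR surfaces below are assumptions for illustration only; they are not data from the cited study.

```python
import numpy as np

def nn_resample(band, factor):
    """Nearest-neighbor upsampling: replicate each pixel `factor` times
    along both axes, so no original pixel values are altered."""
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

# Toy 2x2 "30 m" TM band; resampling from 30 m to 15 m is a factor of 2.
tm_band = np.array([[10, 20],
                    [30, 40]], dtype=np.uint8)
tm_15m = nn_resample(tm_band, 2)                     # shape (4, 4)

# Layer-stacking: band-stack the resampled TM with (placeholder) LiDAR
# surfaces on the same grid, analogous to the CHM + nDSM + TM composite.
chm  = np.full_like(tm_15m, 7)    # canopy height model (placeholder values)
ndsm = np.full_like(tm_15m, 3)    # normalized digital surface model
composite = np.stack([chm, ndsm, tm_15m], axis=-1)   # shape (4, 4, 3)
```

Because NN resampling only replicates pixels, the upsampled band contains exactly the original values, which is the stated reason the excerpt's authors chose it over interpolating methods.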
“…(2002, 2005; Huang et al., 2007; Blanco et al., 2009). Thus, the integration of optical and microwave images, also known as image fusion, has been establishing itself as a technique to maximize the extraction of relevant information, enabling the spatial mapping, in terms of both compositional and structural aspects, of the species of a mangrove forest.…”
Section: Classification of Mangrove Species in Northeast Brazil