2017
DOI: 10.3390/rs9090939

Feature Selection Solution with High Dimensionality and Low-Sample Size for Land Cover Classification in Object-Based Image Analysis

Abstract: Land cover information extraction through object-based image analysis (OBIA) has become an important trend in remote sensing, thanks to the increasing availability of high-resolution imagery. Segmented objects have a large number of features that cause high-dimension and low-sample size problems in the classification process. In this study, on the basis of a partial least squares generalized linear regression (PLSGLR), we propose a group corrected PLSGLR, known as G-PLSGLR, that aims to reduce the redundancy o…

Cited by 22 publications (12 citation statements). References 50 publications.
“…These findings indicate a feature selection problem that occurs regardless of the classification algorithm. An increased number of features, such as the image bands, generally improves the accuracy; however, it also increases the number of training samples required [24]. Considering the limited availability of training samples in most conditions and the nonlinear response of the LCU classes across several bands, known as the "Hughes effect" [25], a method is needed to select a subset of relevant features from the original dataset, improving the classification process and achieving dimension reduction [26]. One alternative for overcoming this problem is to apply linear transformation methods (such as principal component analysis (PCA) and independent component analysis (ICA)) or nonlinear algorithms (such as locality adaptive discriminant analysis (LADA) and multiple marginal Fisher analysis (MMFA)) to remove the correlations and higher-order dependencies among the image bands, and to use the resulting components as input data for classification, simplifying and improving the process.…”
mentioning
confidence: 99%
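One of the alternatives named in the statement above, PCA, is straightforward to sketch. The following is a minimal illustration (not from the cited papers) of reducing a high-dimensional, low-sample-size feature matrix before classification; the feature matrix X, labels y, and the random-forest classifier are all hypothetical stand-ins.

```python
# Minimal sketch: PCA as a linear dimension-reduction step before
# classification, one way to counter the Hughes effect. X and y are
# synthetic placeholders for object features and land cover labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))   # 60 samples, 200 features: n << p
y = rng.integers(0, 4, size=60)  # four hypothetical land cover classes

# Keep enough components to explain 95% of the variance, then classify.
clf = make_pipeline(PCA(n_components=0.95),
                    RandomForestClassifier(random_state=0))
clf.fit(X, y)
print("components kept:", clf.named_steps["pca"].n_components_)
```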
“…Feature selection can improve the performance of object-based image classification [11]. To select features under low sample size and high dimensionality, we adapted a revised version of our previous work [14], which was designed to address this problem. The previous work, the group-corrected partial least squares generalized linear regression (PLSGLR) method, can be described in three steps: (1) group features based on Pearson's correlation coefficient; (2) rank features by PLSGLR and remove insignificant features; (3) reconstruct categories as features are added one by one, calculating the Bayesian information criterion at each step.…”
Section: Feature Selection From Small-size Samples
mentioning
confidence: 99%
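The three steps quoted above can be sketched schematically. Since PLSGLR has no off-the-shelf implementation in scikit-learn, this illustration substitutes ordinary PLS regression coefficients for the PLSGLR ranking and the BIC of a logistic regression for the authors' criterion; the correlation threshold and component count are illustrative assumptions, not values from the paper.

```python
# Schematic sketch of the quoted three-step selection -- not the
# authors' implementation. Step 2 uses PLS regression coefficients as
# a stand-in for PLSGLR; step 3 scores subsets with a logistic-fit BIC.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def select_features(X, y, corr_thresh=0.9):
    n, p = X.shape
    # Step 1: group mutually correlated features (|Pearson r| above the
    # threshold) and keep one representative per group.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    reps, grouped = [], np.zeros(p, dtype=bool)
    for j in range(p):
        if not grouped[j]:
            reps.append(j)
            grouped |= corr[j] > corr_thresh
    # Step 2: rank representatives by PLS coefficient magnitude
    # (treating integer class labels as a continuous response is a
    # simplification of the generalized linear PLS in the paper).
    pls = PLSRegression(n_components=min(5, len(reps), n - 1))
    pls.fit(X[:, reps], y.astype(float))
    order = np.argsort(-np.abs(pls.coef_.ravel()))
    ranked = [reps[i] for i in order]
    # Step 3: add ranked features one by one; keep the subset that
    # minimizes BIC = k * ln(n) - 2 * ln(L).
    best_bic, best = np.inf, []
    for k in range(1, len(ranked) + 1):
        sub = ranked[:k]
        model = LogisticRegression(max_iter=1000).fit(X[:, sub], y)
        ll = -log_loss(y, model.predict_proba(X[:, sub]), normalize=False)
        bic = (k + 1) * np.log(n) - 2.0 * ll
        if bic < best_bic:
            best_bic, best = bic, sub
    return best
```

Called as select_features(X, y) on an (n_objects × n_features) array, the sketch returns the indices of the selected features; the grouping step alone typically removes most of the redundant, highly correlated bands.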
“…Compared with pixels, which provide only spectral information, image objects carry additional information on spectra, geometry, context, and texture. Thus, OBIA leads to sample sparsity in a high-dimensional data space, which increases the need for larger sample sizes [14]. Although some machine learning algorithms that are popular in supervised remote sensing classification are tolerant of insufficient sample sizes in high dimensions, studies show that sample size leads to larger variations in accuracy than the algorithms themselves [10,15].…”
Section: Introduction
mentioning
confidence: 99%
“…Moreover, with the addition of texture and shape information, object-based classification can employ more features to recognize target objects [10][11][12][13]. However, despite its advantages over traditional classifiers, the use of GEOBIA is also constrained by uncertainties associated with image segmentation [14], feature selection [15,16], and classification algorithms [17,18]. Segmentation is an essential process in GEOBIA, and the definition of segmentation parameters can have a significant influence on classification accuracy [19][20][21].…”
Section: Introduction
mentioning
confidence: 99%