2007
DOI: 10.1109/tgrs.2007.892598

Sparse Inverse Covariance Estimates for Hyperspectral Image Classification

Abstract: Classification of remotely sensed hyperspectral images calls for a classifier that gracefully handles high-dimensional data, where the number of samples available for training may be very small relative to the dimension. Even when using simple parametric classifiers such as the Gaussian maximum-likelihood rule, the large number of bands leads to copious amounts of parameters to estimate. Most of these parameters are measures of correlations between features. The covariance structure of a multivariate …
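A quick back-of-the-envelope illustration of that parameter count (the 200-band figure is an assumed example, not taken from the abstract): a Gaussian model over p bands has p(p+1)/2 free covariance parameters, so a 200-band sensor already implies 200·201/2 = 20,100 covariance parameters per class, typically far more than the available training samples.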


Cited by 43 publications (18 citation statements)
References 24 publications
“…Constraints such as the assumption that features are uncorrelated, well known as the naïve Bayes classifier, reduce the number of parameters to estimate for each distribution down to the dimensionality. We recently [2] proposed an approach for reducing the number of parameters that need to be estimated when designing classifiers in high-dimensional feature spaces: sparse Cholesky triangle inverse covariance (STIC) estimates. The method is based on time series theory regarding the Cholesky decomposition of the inverse covariance…”
Section: Parameter Sparsing in Full Dimension
confidence: 99%
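The quoted statement only summarizes the STIC idea. A minimal Python sketch of the general approach, assuming a simple hard threshold on the Cholesky factor (the published method selects the sparsity pattern more carefully; `threshold` is a hypothetical tuning parameter):

```python
import numpy as np

def sparse_cholesky_inverse_cov(X, threshold=0.05):
    """Illustrative sparse Cholesky-factor inverse covariance estimate.

    Simplified stand-in for STIC: factor the inverse sample covariance
    as L @ L.T (L lower triangular), zero small factor entries, and
    recompose. Not the authors' exact selection procedure.
    """
    S = np.cov(X, rowvar=False)           # p x p sample covariance
    P = np.linalg.inv(S)                  # needs n > p to be stable
    L = np.linalg.cholesky(P)             # lower-triangular factor of P
    L[np.abs(L) < threshold] = 0.0        # sparsify the triangle
    return L @ L.T                        # symmetric PSD by construction

# Toy usage: 200 samples of a 10-band signal
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
P_hat = sparse_cholesky_inverse_cov(X)
```

Zeroing entries of the triangular factor, rather than of the inverse covariance itself, keeps the recomposed estimate positive semidefinite for free.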
“…The search is guided by ten-fold cross-validation (10-CV) as a performance measure. Further details of this approach can be found in [2].…”
Section: Parameter Sparsing in Full Dimension
confidence: 99%
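A hedged sketch of such a 10-CV-guided search loop, with hypothetical `fit` and `score` callables standing in for the classifier training and evaluation steps of the cited work:

```python
import numpy as np
from sklearn.model_selection import KFold

def select_threshold_by_10cv(X, y, thresholds, fit, score):
    """Return the candidate threshold with the best mean 10-CV score.

    `fit(X, y, t)` and `score(model, X, y)` are hypothetical callables;
    the cited work uses its own classifier and accuracy measure.
    """
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    best_t, best_score = None, -np.inf
    for t in thresholds:
        fold_scores = [
            score(fit(X[tr], y[tr], t), X[te], y[te])
            for tr, te in kf.split(X)
        ]
        if np.mean(fold_scores) > best_score:
            best_t, best_score = t, np.mean(fold_scores)
    return best_t
```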
“…While shrinkage is the most common regularization scheme, sparse [15,16] and sparse transform [17-20] methods have also been proposed. To deal with the non-Gaussian nature of most data, both robust [21] and anti-robust [22] estimators have been proposed.…”
Section: Introduction
confidence: 99%
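Since this statement contrasts shrinkage with sparse methods, here is a minimal sketch of the shrinkage scheme it calls most common: a convex combination of the sample covariance with a scaled-identity target. The mixing weight `alpha` is an assumed fixed value here; practical estimators such as Ledoit-Wolf choose it from the data.

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.1):
    """Blend the sample covariance with a scaled identity target.

    `alpha` in [0, 1] is a hypothetical fixed shrinkage weight; the
    methods cited above select it in a data-driven way.
    """
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)   # average-variance identity
    return (1.0 - alpha) * S + alpha * target
```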
“…These observations in theory and practice motivate the wide use of sparse inverse covariance estimation. It has been shown to be useful in various applications, including evaluating patterns of association among variables [9], exploration of genetic networks [17], senator voting records analysis [2], hyperspectral image classification [4], and speech recognition [5].…”
Section: Introduction
confidence: 99%