1996
DOI: 10.1109/34.506799

Covariance matrix estimation and classification with limited training data

Abstract: A new covariance matrix estimator useful for designing classifiers with limited training data is developed. In experiments, this estimator achieved higher classification accuracy than the sample covariance matrix and common covariance matrix estimates. In about half of the experiments, it achieved higher accuracy than regularized discriminant analysis, but required much less computation.
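A minimal sketch of the idea, assuming the estimator is a two-way mixture of the class sample covariance and a pooled (common) covariance, with the mixing weight chosen by leave-one-out Gaussian log-likelihood. The function name `mixture_covariance`, the weight grid, and the two-component form are illustrative assumptions; the paper's estimator considers a richer family of mixtures and a more efficient leave-one-out computation.

```python
import numpy as np

def mixture_covariance(X_i, pooled_cov, alphas=np.linspace(0.0, 1.0, 11)):
    """Mix the class sample covariance with a pooled covariance,
    choosing the weight by leave-one-out Gaussian log-likelihood.

    X_i        : (n, p) training samples for one class.
    pooled_cov : (p, p) covariance pooled over all classes.
    """
    n, p = X_i.shape
    best_alpha, best_ll = 1.0, -np.inf
    for alpha in alphas:
        ll = 0.0
        for r in range(n):
            X_r = np.delete(X_i, r, axis=0)        # hold out sample r
            mean_r = X_r.mean(axis=0)
            S_r = np.cov(X_r, rowvar=False, bias=True)
            C = (1.0 - alpha) * S_r + alpha * pooled_cov
            sign, logdet = np.linalg.slogdet(C)
            if sign <= 0:                          # singular mixture: reject this alpha
                ll = -np.inf
                break
            d = X_i[r] - mean_r                    # score the held-out sample
            ll += -0.5 * (logdet + d @ np.linalg.solve(C, d)
                          + p * np.log(2.0 * np.pi))
        if ll > best_ll:
            best_alpha, best_ll = alpha, ll
    S = np.cov(X_i, rowvar=False, bias=True)
    return (1.0 - best_alpha) * S + best_alpha * pooled_cov, best_alpha
```

This brute-force loop refits the held-out statistics for every sample, which is what makes the reduced computation reported in the abstract attractive by comparison.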

Cited by 273 publications (167 citation statements)
References 2 publications (3 reference statements)
“…To make the above approach useful for machine learning, we still need to define a way to calculate the most convenient value for H. This can be achieved by means of the leave-one-out strategy [4,5,6]. The technique is to remove one of the training samples from X, compute SIR using the remaining samples (for a given value of H), and then compute the recognition rate of the sample that was omitted; i.e.…”
Section: The Optimal Value Of H
Citation type: mentioning (confidence: 99%)
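The excerpt above describes plain leave-one-out selection of the parameter H. A generic sketch of that loop in Python, where `fit` and `predict` are hypothetical stand-ins for the SIR computation and the classifier used in the citing paper:

```python
import numpy as np

def loo_select(X, y, candidates, fit, predict):
    """Return the candidate H with the highest leave-one-out accuracy.

    fit(X_train, y_train, H) -> model   (hypothetical trainer)
    predict(model, x) -> label          (hypothetical classifier)
    """
    n = len(y)
    best_H, best_acc = None, -1.0
    for H in candidates:
        correct = 0
        for r in range(n):
            mask = np.arange(n) != r           # remove sample r from training
            model = fit(X[mask], y[mask], H)
            if predict(model, X[r]) == y[r]:   # score the omitted sample
                correct += 1
        acc = correct / n
        if acc > best_acc:
            best_H, best_acc = H, acc
    return best_H, best_acc
```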
“…The notation \r conforms to the Hoffbeck and Landgrebe (1996) work. It indicates that the corresponding quantity is calculated with the r-th observation from class i removed.…”
Section: The Mixture Parameter
Citation type: mentioning (confidence: 99%)
“…When the true values of the mean and the covariance matrix in equation (1) are replaced by their respective estimates, the Bayes decision rule achieves optimal classification accuracy only as the number of training samples increases toward infinity (e.g., Hoffbeck and Landgrebe, 1996). In fact, for p-dimensional patterns the sample covariance matrix is singular if fewer than p + 1 training examples from each class i are available; that is, the sample covariance matrix cannot be inverted if k_i is less than the dimension of the feature space.…”
Section: Maximum Probability Classifier
Citation type: mentioning (confidence: 99%)
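The singularity claim is easy to verify numerically: with n training samples, the sample covariance has rank at most n - 1, so for n < p + 1 it cannot be inverted. A small illustration with assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10                        # feature dimension
X = rng.normal(size=(5, p))   # only 5 < p + 1 training samples
S = np.cov(X, rowvar=False)   # (p, p) sample covariance matrix

print(np.linalg.matrix_rank(S))   # 4 = n - 1, so S is rank-deficient
# np.linalg.inv(S) would fail or be numerically meaningless here.
```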