2000
DOI: 10.1023/a:1004129706000
Discriminant Analysis When a Block of Observations is Missing

Cited by 9 publications (26 citation statements)
References 11 publications
“…We consider the problem of classifying an unlabeled observation vector when monotone missing training data are present, where μ_i and Σ are the ith population mean vector and common covariance matrix, respectively. Here, we re-compare two linear classification procedures for block monotone missing (BMM) training data: one classifier is from [1], and the other employs the maximum likelihood estimator (MLE).…”
Section: Introduction
confidence: 99%
“…Monotone missing data occur for an observation vector x_j when, if x_ji is missing, then x_jk is missing for all k > i. The authors [1] claim that their "linear combination classification procedure is better than the substitution methods (MLE) as the proportion of missing observations gets larger" when block monotone missing data are present in the training data. Specifically, [1] performed a Monte Carlo simulation and concluded that their classifier achieves a lower expected error rate (EER) than the MLE substitution (MLES) classifier formulated by [2] as the proportion of missing observations increases.…”
Section: Introduction
confidence: 99%
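The monotone missing pattern quoted above can be checked mechanically. The following is a minimal sketch (not taken from the cited papers), assuming missing values are encoded as NaN in a NumPy array:

```python
import numpy as np

def is_monotone_missing(X):
    """Check the monotone missing pattern: within each observation
    vector x_j, if component x_ji is missing then x_jk must be
    missing for every k > i (missing values encoded as NaN)."""
    mask = np.isnan(X)                  # True where a value is missing
    for row in mask:
        if row.any():
            first = row.argmax()        # index of the first missing entry
            if not row[first:].all():   # everything after it must be missing
                return False
    return True

# A 3-variable training sample with a block monotone missing tail:
X = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.5, np.nan],
              [0.2, np.nan, np.nan]])
print(is_monotone_missing(X))   # True: missingness only grows rightward

X[1, 1], X[1, 2] = np.nan, 4.0  # observed value after a missing one
print(is_monotone_missing(X))   # False: x_12 missing but x_13 observed
```

Under this convention, a block monotone missing training set is simply one where the missing tails line up in blocks across observations.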
“…Schafer, 1997, p. 218). There is increasing interest in the development of statistical methods for handling monotone missing data from normal or elliptical populations (cf., for instance, Batsidis and Zografos, 2006; Chung and Han, 2000; Hao and Krishnamoorthy, 2001; Kanda and Fujikoshi, 1998; Krishnamoorthy and Pannala, 1998; and references therein).…”
Section: Introduction
confidence: 99%
“…From their simulations, Chung and Han (2000) showed that the linear combination classification procedure outperforms Anderson's procedure (Anderson, 1957), the EM algorithm (Dempster et al., 1977), and the Hocking and Smith procedure (Hocking and Smith, 1968) as the proportion of missing observations gets larger.…”
confidence: 99%
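For intuition about the substitution approach these comparisons are made against, here is a rough sketch: naive per-variable mean substitution followed by Fisher's linear discriminant. This is an assumption-laden stand-in for illustration only, not the MLE substitution classifier or the linear combination procedure from the cited papers:

```python
import numpy as np

def mean_substitute(X):
    """Fill NaN entries with the per-variable mean of the observed
    values. A deliberately naive stand-in for the more principled
    MLE-based substitution discussed in the literature."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def lda_rule(X1, X2):
    """Fisher linear discriminant for two classes with a pooled
    covariance estimate; returns a score function (positive -> class 1)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    S = ((n1 - 1) * np.cov(X1, rowvar=False)
         + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    w = np.linalg.solve(S, m1 - m2)
    c = w @ (m1 + m2) / 2
    return lambda x: x @ w - c

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(30, 3))   # class 1 training sample
X2 = rng.normal(2.0, 1.0, size=(30, 3))   # class 2 training sample
X1[15:, 2] = np.nan                        # block monotone missing tail
score = lda_rule(mean_substitute(X1), X2)
print(score(np.zeros(3)) > 0)              # near class 1's mean -> class 1
```

The debate summarized in the citation statements is precisely about how such substitution-style classifiers degrade, relative to the linear combination procedure, as the missing block grows.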