1990
DOI: 10.1080/00949659008811211

On a multiple observations model in discriminant analysis

Abstract: In this paper the multiple observations, or two-factor mixed hierarchical, model for the classification problem is studied. Under this model the Bayes classification statistic and some of its properties are discussed. It will be seen that the multiple observations model includes the fixed effects model as a special case and bears an interesting relationship to the random effects model. The multiple observations model has potential applications in a variety of fields. In medicine, for example, a patient …
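The abstract refers to a Bayes classification statistic for a subject on whom several observations are taken. The paper's exact mixed-model statistic is not reproduced on this page; the following is a minimal sketch, assuming equal-covariance Gaussian class-conditional densities and independent repeated observations, of how repeated measurements on one subject can be pooled into a single Bayes-rule decision. The function name and parameters are illustrative, not the authors'.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_subject(X, class_params, priors=None):
    """Assign a subject with repeated observations X (r x p) to the class
    with the largest posterior score under a simple Gaussian sketch.

    class_params: list of (mean, cov) pairs, one per class.
    The r repeated observations are pooled by summing log-densities;
    this is an illustrative stand-in, not the paper's mixed-model statistic.
    """
    k = len(class_params)
    priors = np.full(k, 1.0 / k) if priors is None else np.asarray(priors)
    scores = []
    for (mean, cov), prior in zip(class_params, priors):
        loglik = multivariate_normal(mean, cov).logpdf(X).sum()
        scores.append(np.log(prior) + loglik)
    return int(np.argmax(scores))

# Example: a "patient" measured 4 times on 2 variables
rng = np.random.default_rng(0)
params = [(np.zeros(2), np.eye(2)), (np.full(2, 2.0), np.eye(2))]
X_new = rng.normal(loc=2.0, scale=1.0, size=(4, 2))
print(classify_subject(X_new, params))  # most likely class 1 under this setup
```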

Cited by 10 publications (3 citation statements)
References 9 publications
“…To describe these relations we use the matrix-variate normal distribution (Gupta and Nagar 2000). To the best of our knowledge, matrix-variate normal distributions have been used in classification only in the context of repeated measurements, where the same set of n variables is observed on each sample member more than once (Choi 1972;Gupta 1986;Gupta and Logan 1990). However, in the majority of practical cases each sample is observed only once, and this is the case that we study in the present paper.…”
Section: Introduction (mentioning)
confidence: 93%
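The statement above cites the matrix-variate normal distribution (Gupta and Nagar 2000) as the tool for repeated-measurement classification. For reference, a sketch of its log-density, which a classifier could compare across classes, is given below; the parametrization (mean M, row covariance U, column covariance V) follows the standard form, and the function name is an assumption of this sketch rather than anything from the cited papers.

```python
import numpy as np

def matrix_normal_logpdf(X, M, U, V):
    """Log-density of the matrix-variate normal MN(M, U, V).

    X, M : (n, p) observation and mean matrices
    U    : (n, n) row (between-observation) covariance
    V    : (p, p) column (between-variable) covariance
    """
    n, p = X.shape
    D = X - M
    Uinv = np.linalg.inv(U)
    Vinv = np.linalg.inv(V)
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_V = np.linalg.slogdet(V)
    quad = np.trace(Vinv @ D.T @ Uinv @ D)
    return -0.5 * (n * p * np.log(2 * np.pi)
                   + p * logdet_U + n * logdet_V + quad)
```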
“…This step draws θ^(t+1) from P(θ | y_obs, y_mis^(t+1)). This creates a Markov chain (y_mis^(1), θ^(1)), (y_mis^(2), θ^(2)), (y_mis^(3), θ^(3)), …”
Section: The Posterior (unclassified)
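The step quoted above alternates a draw of the missing data with a draw of the parameters, i.e. a data-augmentation (Gibbs-type) scheme whose iterates form the Markov chain (y_mis^(t), θ^(t)). A schematic sketch of that loop is below; the two conditional samplers are placeholders supplied by the user, since the citing paper's model is not specified on this page.

```python
def data_augmentation(y_obs, draw_y_mis, draw_theta, theta0, n_iter=1000):
    """Generic data-augmentation sampler.

    Alternates
        y_mis^(t+1) ~ P(y_mis | y_obs, theta^(t))
        theta^(t+1) ~ P(theta | y_obs, y_mis^(t+1))
    producing the chain (y_mis^(1), theta^(1)), (y_mis^(2), theta^(2)), ...
    `draw_y_mis` and `draw_theta` are user-supplied conditional samplers
    (placeholders here; the cited model is not reproduced).
    """
    theta = theta0
    chain = []
    for _ in range(n_iter):
        y_mis = draw_y_mis(y_obs, theta)   # imputation step
        theta = draw_theta(y_obs, y_mis)   # posterior step
        chain.append((y_mis, theta))
    return chain
```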
“…Gupta (1986) extended the method to the multivariate scenario. Gupta and Logan (1990) have also studied the classical approach to this problem. Also, Bayesian predictive discrimination using a diffuse prior for the mean vector and the variance-covariance matrices has been studied by Logan and Gupta (1993).…”
Section: Introduction (mentioning)
confidence: 99%