1978
DOI: 10.1177/001316447803800105
The Use of Chance Corrected Percentage of Agreement to Interpret the Results of a Discriminant Analysis

Abstract: Most programs for performing discriminant analysis provide a summary table of hits and misses in predicting group membership by using the discriminant function. The interpretation of such tables can be enhanced greatly by computing Cohen's kappa, κ, the chance corrected percentage of agreement between actual and predicted group membership. The standard error of kappa can be used to set confidence limits for the accuracy of the discriminant prediction and to test the difference in predictive accuracy for two in…
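The procedure the abstract describes can be sketched in a few lines: compute κ from a hits-and-misses table, attach the simple large-sample standard error, and form κ/SE as an approximate z statistic. The table values below are hypothetical, and the SE shown is the common large-sample approximation rather than the exact expression derived in the paper.

```python
import numpy as np

def kappa_with_se(table):
    """Cohen's kappa for an actual-by-predicted classification table,
    with the simple large-sample standard error (an approximation)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_o = np.trace(t) / n                                 # observed hit rate
    p_e = (t.sum(axis=1) * t.sum(axis=0)).sum() / n ** 2  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    se = np.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))  # approximate SE
    return kappa, se

# Hypothetical two-group summary table: rows = actual, columns = predicted.
table = [[40, 10],
         [15, 35]]
k, se = kappa_with_se(table)
z = k / se  # kappa divided by its standard error behaves like a z score
```

With this table, observed agreement is .75 against chance agreement of .50, giving κ = .50; κ/SE can then be referred to the standard normal distribution to test whether classification exceeds chance.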

Cited by 16 publications (5 citation statements). References 1 publication.
“…Examination of the standardized discriminant function coefficients indicated that education and age were the most influential in determining group differences. Almost three-fourths (73%) of the analysis group were correctly classified for the discriminant function (Table 3) with a significant Cohen's kappa (K = 5.6, p ≤ 0.001) (Wiedemann and Fenster, 1978). Cross validation resulted in almost three-fourths (71%) correctly classified.…”
Section: Results
confidence: 96%
“…Among those cases in the treatment completion group, 59.5% were correctly classified, and among those cases in the treatment noncompletion group, 65.8% were correctly classified. The Kappa coefficient, an index that accounts for chance agreement and is commonly used to assess the results of discriminant analysis procedures (Green & Salkind, 2005; Wiedemann & Fenster, 1978), was .24 ( p < .02), indicating fair accuracy in outcome group classification. The Wilks's lambda statistic was also used to test the differences across outcome group means for each predictor so that their effectiveness in differentiating among treatment completion/noncompletion within the model could be assessed separately.…”
Section: Results
confidence: 99%
“…The degree to which prediction of group membership exceeded chance was tested by means of kappa, a statistic that, when divided by its standard error, is analogous to a z score (Weidemann & Fenster, 1978). For the classification of cases in the discriminant‐function analysis of Survey I, κ = .195, SE κ = .098, z = 3.98, p < .0001.…”
Section: Results
confidence: 99%