1996
DOI: 10.1080/00220973.1996.9943806
Estimating the Coefficient of Cross-Validity in Multiple Regression: A Comparison of Analytical and Empirical Methods

Cited by 20 publications (29 citation statements)
References 17 publications
“…In previous studies of statistical bias, researchers have operationally defined unbiased estimators as those estimators which produce values within +0.01 and -0.01 of the corresponding parameter values (Kromrey & Hines, 1996; Yin & Fan, 2001). We used the same criterion in the present study (although Wang & Thompson, 2007, did not) to describe an unbiased estimate.…”
Section: Discussion
confidence: 93%
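The ±0.01 criterion quoted above can be illustrated with a minimal simulation sketch. This is not code from the cited studies; the estimator, sample sizes, and replication count below are hypothetical, chosen only to show how the rule is applied.

```python
import numpy as np

# Minimal sketch (not taken from the cited studies): treat an estimator as
# "unbiased" if its average estimate falls within +/-0.01 of the known
# parameter value, the operational criterion described in the passage above.
rng = np.random.default_rng(0)

def mean_bias(estimates, parameter):
    """Average signed difference between the estimates and the true parameter."""
    return float(np.mean(estimates) - parameter)

# Hypothetical example: sample means as estimates of a known population mean.
true_mean = 0.50
estimates = [rng.normal(true_mean, 0.1, size=50).mean() for _ in range(2000)]

bias = mean_bias(estimates, true_mean)
print(f"bias = {bias:+.4f}; unbiased by the +/-0.01 rule: {abs(bias) <= 0.01}")
```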
“…As noted by Kromrey and Hines (1996), regression weights that are developed in one sample and applied to a new sample will almost always yield a smaller explained variance coefficient of effect size. Pedhazur (1982) recommended the splitting of one sample into two samples, in the absence of multiple samples, to address effect size variation issues.…”
Section: Sample
confidence: 97%
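The shrinkage described in this statement (weights derived in one sample explaining less variance in a new sample) and the sample-splitting remedy attributed to Pedhazur (1982) can be sketched as follows. The data, split point, and helper names are hypothetical assumptions, not part of the cited work, and the validation R² shown is one common way of expressing empirical cross-validity.

```python
import numpy as np

# Minimal sketch of the sample-splitting idea: estimate regression weights in
# a calibration half, then apply those fixed weights to a validation half.
# The validation (cross-validity) R^2 is almost always smaller than the
# calibration R^2. Data and dimensions below are hypothetical.
rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=2.0, size=n)

half = n // 2
X_cal, y_cal = X[:half], y[:half]   # calibration (derivation) sample
X_val, y_val = X[half:], y[half:]   # validation (cross-validation) sample

def add_intercept(M):
    return np.column_stack([np.ones(len(M)), M])

# Ordinary least-squares weights estimated in the calibration sample only.
b, *_ = np.linalg.lstsq(add_intercept(X_cal), y_cal, rcond=None)

def r_squared(X_part, y_part, weights):
    pred = add_intercept(X_part) @ weights
    ss_res = np.sum((y_part - pred) ** 2)
    ss_tot = np.sum((y_part - y_part.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("calibration R^2:    ", round(r_squared(X_cal, y_cal, b), 3))
print("cross-validity R^2: ", round(r_squared(X_val, y_val, b), 3))
```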
“…Thus, these are the three factors invoked in the statistical correction for the positive bias in uncorrected effect sizes, which attempts to remove this bias and produce more accurate effect estimates (see Kromrey and Hines, 1996). Examples of such estimates are the regression "adjusted R²" and the ANOVA omega squared (ω²) and epsilon squared (ε²).…”
Section: Corrected Variance-accounted-for Statistics
confidence: 99%
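Of the corrected statistics named in that passage, adjusted R² is the most familiar. As a minimal sketch, the standard Wherry/Ezekiel-style adjustment (not a formula taken from the cited paper) uses exactly the three factors mentioned: the sample R², the sample size n, and the number of predictors p. The numbers in the example are hypothetical.

```python
# Minimal sketch of a corrected variance-accounted-for statistic: the common
# "adjusted R^2" shrinks the sample R^2 using the sample size n and the
# number of predictors p to offset the positive small-sample bias.
def adjusted_r_squared(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Hypothetical numbers: an uncorrected R^2 of .40 with n = 50 and p = 5
# shrinks noticeably once the correction is applied.
print(round(adjusted_r_squared(0.40, 50, 5), 3))   # ~0.332
```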