2005
DOI: 10.22237/jmasm/1114906920
An Empirical Evaluation Of The Retrospective Pretest: Are There Advantages To Looking Back?

Abstract: This article builds on research regarding response shift effects and retrospective self-report ratings. Results suggest moderate evidence of a response shift bias in the conventional pretest-posttest treatment design in the treatment group. The use of explicitly worded anchors on response scales, as well as the measurement of knowledge ratings (a cognitive construct) in an evaluation methodology setting, helped to mitigate the magnitude of a response shift bias. The retrospective pretest-posttest design provid…


Cited by 17 publications (9 citation statements)
References 14 publications
“…Nakonezny and Rodgers (2005) determined that in their study, retrospective pretests were more comparable to objective measures of change but that response-shift bias was moderated by explicitly worded anchors on the response scale. Other studies have shown that response-shift bias is also smaller when respondents provide knowledge or behavioral ratings rather than attitude ratings and when questions and response anchors are explicit and clear (Bornstein, Putnick, Costlow, & Suwalsky, 2018; Collins et al., 1985; Hill & Betz, 2005).…”
Section: Common Mistakes and Pitfalls in the Literature About Retrospective Pretests
confidence: 96%
“…For example, the samples in both studies were large: for Study 1, the authors reported that their analysis included “4,713 student responses collected across three time points” (Little et al., 2019, p. 3), which presumably describes a sample of about 2,000 respondents, and Study 2 had 1,699 participants. Samples of this size would be ideal for creating a control group or a set of control groups (e.g., a Solomon four-group design) to explore and control for pretesting effects on posttests (Nakonezny & Rodgers, 2005; Sprangers & Hoogstraten, 1989), as well as to examine the effects on posttests of including a retrospective pretest. Also, if the program had included algebra training and there were progress tests in algebra skills, examining change scores calculated from both traditional and retrospective pretests would have provided important validation of the method in assessment above and beyond attitude change.…”
Section: Common Mistakes and Pitfalls in the Literature About Retrospective Pretests
confidence: 99%
“…administering both the pre- and post-test questions after the intervention) provides a more accurate assessment of change than a conventional pretest-posttest design (i.e., administering the pre-test before and the post-test after), because it allows the respondent to use a consistent scale when answering questions about both the present and the past (Nakonezny & Rodgers, 2005).…”
Section: Survey Content and Analysis
confidence: 99%
“…The RPP is, therefore, a highly recommended alternative approach to the traditional pretest-posttest design (Blome & Augustin, 2015; Hill & Betz, 2005). Research using a variety of measures indicates that pretest data collected at the posttest time provide a highly reliable and valid reflection of participants’ true preintervention levels and thereby provide very precise estimation of participants’ perceived changes due to the program effects (e.g., J. M. Allen & Nimon, 2007; Cohen, 2016; Nakonezny & Rodgers, 2005).…”
Section: Benefits of the RPP Design
confidence: 99%