This meta-analysis addresses whether achievement goal researchers are using different labels for the same constructs or the same labels for different constructs. We systematically examined whether conceptual and methodological differences in the measurement of achievement goals moderated achievement goal intercorrelations and relationships with outcomes. We reviewed 243 correlational studies of self-reported achievement goals comprising a total of 91,087 participants. The items used to measure achievement goals were coded for goal relevance (i.e., whether they referenced future-focused, cognitively represented, competence-related end states that the individual approaches or avoids) and were categorized according to the different conceptual definitions found within the literature. The results indicated that goal-outcome and goal-goal correlations differed significantly depending on the goal scale chosen, the individual items used to assess goal strivings, and the sociodemographic characteristics of the sample under study. For example, performance-approach goal scales with a majority of normatively referenced items correlated positively with performance outcomes (r = .14), whereas scales with a majority of appearance and evaluative items correlated negatively (r = -.14). Mastery-approach goal scales that contained goal-relevant language were not significantly related to performance outcomes (r = .05), whereas those without goal-relevant language were positively related (r = .14). We concluded that achievement goal researchers are using the same labels for conceptually different constructs. This discrepancy between conceptual and operational definitions, together with the absence of goal-relevant language in achievement goal measures, may be preventing productive theory testing, research synthesis, and practical application.
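Pooled correlations like those reported above are conventionally computed by converting each study's r to Fisher's z, averaging with inverse-variance weights, and back-transforming. The following is a minimal Python sketch of that standard fixed-effect procedure; the (r, n) study values in it are hypothetical placeholders, not data from the reviewed studies.

```python
import math

def fisher_z(r):
    """Convert a correlation r to Fisher's z for averaging."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Convert a pooled Fisher's z back to a correlation."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pooled_r(studies):
    """Fixed-effect pooled correlation.

    studies: list of (r, n) pairs. Each z is weighted by n - 3,
    the inverse of its sampling variance under the Fisher transform.
    """
    num = sum((n - 3) * fisher_z(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    return inverse_fisher_z(num / den)

# Hypothetical subgroup of studies (correlation, sample size):
normative_items = [(0.18, 250), (0.12, 400), (0.15, 120)]
print(round(pooled_r(normative_items), 2))  # pooled r for the subgroup
```

A moderator analysis of the kind described in the abstract amounts to computing such pooled estimates separately for each coding category (e.g., normatively referenced vs. appearance/evaluative item scales) and testing whether they differ.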
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed three different computer interfaces that provided students with varying levels of flexibility to change and review answers. No difference in scores was observed among the three modes, but students completed the least-flexible mode faster than the other two modes. It appears that less flexible test modes yield faster completion without reducing performance relative to more flexible modes.
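A "faster completion, no score difference" comparison like Experiment 1's is commonly tested with an independent-samples t-test on each outcome. Below is a minimal sketch assuming a between-subjects design; the completion times are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical completion times in minutes for two assessment modes.
paper_times = rng.normal(loc=42, scale=6, size=30)
computer_times = rng.normal(loc=36, scale=6, size=30)

# Welch's t-test: does not assume equal variances across groups.
t_stat, p_value = stats.ttest_ind(paper_times, computer_times,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same test applied to scores would be expected to return a nonsignificant result under the pattern the abstract describes, though concluding "no difference" strictly requires an equivalence test rather than a failed significance test.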