In this article, we introduce brief self-report and informant-report versions of the Grit Scale, which measures trait-level perseverance and passion for long-term goals. The Short Grit Scale (Grit-S) retains the 2-factor structure of the original Grit Scale (Duckworth, Peterson, Matthews, & Kelly, 2007) with 4 fewer items and improved psychometric properties. We present evidence for the Grit-S's internal consistency, test-retest stability, consensual validity with informant-report versions, and predictive validity. Among adults, the Grit-S was associated with educational attainment and fewer career changes. Among adolescents, the Grit-S longitudinally predicted GPA and, inversely, hours watching television. Among cadets at the United States Military Academy, West Point, the Grit-S predicted retention. Among Scripps National Spelling Bee competitors, the Grit-S predicted final round attained, a relationship mediated by lifetime spelling practice.
The increasing prominence of standardized testing to assess student learning motivated the current investigation. We propose that standardized achievement test scores assess competencies determined more by intelligence than by self-control, whereas report card grades assess competencies determined more by self-control than by intelligence. In particular, we suggest that intelligence helps students learn and solve problems independent of formal instruction, whereas self-control helps students study, complete homework, and behave positively in the classroom. Two longitudinal, prospective studies of middle school students support predictions from this model. In both samples, IQ predicted changes in standardized achievement test scores over time better than did self-control, whereas self-control predicted changes in report card grades over time better than did IQ. As expected, the effect of self-control on changes in report card grades was mediated in Study 2 by teacher ratings of homework completion and classroom conduct. In a third study, ratings of middle school teachers about the content and purpose of standardized achievement tests and report card grades were consistent with the proposed model. Implications for pedagogy and public policy are discussed.
Background: Individual differences in subjective response to alcohol, as measured by laboratory-based alcohol challenge, have been identified as a candidate phenotypic risk factor for the development of alcohol use disorders (AUDs). Two models have been developed to explain the role of subjective response to alcohol, but predictions from the two models are contradictory, and theoretical consensus is lacking.

Methods: This investigation used a meta-analytic approach to review the accumulated evidence from alcohol-challenge studies of subjective response as a risk factor. Data from 32 independent samples (total N = 1,314) were aggregated to produce quantitative estimates of the effects of risk group status (i.e., positive family history of AUDs or heavier alcohol consumption) on subjective response.

Results: As predicted by the Low Level of Response Model (LLRM), family history positive groups experienced reduced overall subjective response relative to family history negative groups. This effect was most evident among men, with family history positive men responding more than half a standard deviation less than family history negative men. In contrast, consistent with the Differentiator Model (DM), heavier drinkers of both genders responded 0.4 standard deviations less on measures of sedation than did lighter drinkers but nearly half a standard deviation more on measures of stimulation, with the stimulation difference appearing most prominent on the ascending limb of the blood alcohol concentration curve.

Conclusions: The accumulated results from three decades of family history comparisons provide considerable support for the LLRM. In contrast, results from typical consumption comparisons were largely consistent with predictions of the DM. The LLRM and DM may describe two distinct sets of phenotypic risk, with importantly different etiologies and predictions for the development of AUDs.
Intelligence tests are widely assumed to measure maximal intellectual performance, and predictive associations between intelligence quotient (IQ) scores and later-life outcomes are typically interpreted as unbiased estimates of the effect of intellectual ability on academic, professional, and social life outcomes. The current investigation critically examines these assumptions and finds evidence against both. First, we examined whether motivation is less than maximal on intelligence tests administered in the context of low-stakes research situations. Specifically, we completed a meta-analysis of random-assignment experiments testing the effects of material incentives on intelligence-test performance in a collective 2,008 participants. Incentives increased IQ scores by an average of 0.64 SD, with larger effects for individuals with lower baseline IQ scores. Second, we tested whether individual differences in motivation during IQ testing can spuriously inflate the predictive validity of intelligence for life outcomes. Trained observers rated test motivation among 251 adolescent boys completing intelligence tests, using a 15-min "thin-slice" video sample. IQ score predicted life outcomes, including academic performance in adolescence and criminal convictions, employment, and years of education in early adulthood. After adjusting for the influence of test motivation, however, the predictive validity of intelligence for life outcomes was significantly diminished, particularly for nonacademic outcomes.
Collectively, our findings suggest that, under low-stakes research conditions, some individuals try harder than others, and, in this context, test motivation can act as a third-variable confound that inflates estimates of the predictive validity of intelligence for life outcomes.

One of the most robust social science findings of the 20th century is that intelligence quotient (IQ) scores predict a broad range of life outcomes, including academic performance, years of education, physical health and longevity, and job performance (1-7). The predictive power of IQ for such diverse outcomes suggests intelligence as a parsimonious explanation for individual and group differences in overall competence.

However, what is intelligence? Boring's now famous reply to this question was that "intelligence as a measurable capacity must at the start be defined as the capacity to do well in an intelligence test. Intelligence is what the tests test" (ref. 8, p. 35). This early comment augured the now widespread conflation of the terms "IQ" and "intelligence," an unfortunate confusion we aim to illuminate in the current investigation.

Intelligence has more recently, and more usefully, been defined as the "ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought" (ref. 5, p. 77). IQ scores, in contrast, measure the performance of individuals on tests designed to assess intelligence. That is, IQ is an observed, manifest ...