Research on the predictive bias of cognitive tests has generally shown (a) no slope effects and (b) small intercept effects, typically favoring the minority group. Aguinis, Culpepper, and Pierce (2010) simulated data and demonstrated that statistical artifacts may have led to a lack of power to detect slope differences and an overestimate of the size of the intercept effect. In response to Aguinis et al.'s (2010) call for a revival of predictive bias research, we used data on over 475,000 students entering college between 2006 and 2008 to estimate slope and intercept differences in the college admissions context. Corrections for statistical artifacts were applied. Furthermore, plotting of regression lines supplemented traditional analyses of predictive bias to offer additional evidence of the form and extent of any predictive bias. Congruent with previous research on bias of cognitive tests, using SAT scores in conjunction with high school grade-point average to predict first-year grade-point average revealed minimal differential prediction (ΔR² for intercept effects ranged from .004 to .032 and ΔR² for slope effects ranged from .001 to .013, depending on the corrections applied and comparison groups examined). We found, on the basis of regression plots, that college grades were consistently overpredicted for Black and Hispanic students and underpredicted for female students.
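For readers unfamiliar with how slope and intercept differences of this kind are estimated, the sketch below illustrates the standard step-up moderated-regression approach on simulated data. The variable names (fygpa, sat, hsgpa, group) and the data are hypothetical stand-ins, not the study's dataset or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; the actual study used admissions records.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "sat": rng.normal(0, 1, n),       # standardized test score
    "hsgpa": rng.normal(0, 1, n),     # standardized high school GPA
    "group": rng.integers(0, 2, n),   # 1 = focal group, 0 = reference group
})
df["fygpa"] = 0.4 * df["sat"] + 0.3 * df["hsgpa"] + rng.normal(0, 1, n)

# Step 1: common regression; Step 2: add the group indicator (intercept
# difference); Step 3: add predictor-by-group interactions (slope differences).
common = smf.ols("fygpa ~ sat + hsgpa", data=df).fit()
intercept_step = smf.ols("fygpa ~ sat + hsgpa + group", data=df).fit()
slope_step = smf.ols("fygpa ~ (sat + hsgpa) * group", data=df).fit()

print("Delta R2, intercept:", intercept_step.rsquared - common.rsquared)
print("Delta R2, slope:", slope_step.rsquared - intercept_step.rsquared)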
The purpose of the current study was to examine the relationship between Advanced Placement (AP) exam participation and enrollment in a 4‐year postsecondary institution. A positive relationship was expected given that the primary purpose of offering AP courses is to allow students to engage in college‐level academic work while in high school, and potentially receive college credit by earning qualifying scores on the corresponding AP exam. Therefore, college preparation and planning are an implicit and explicit part of AP participation. Analyzing a national sample of over 1.5 million students, the current study found that AP participation was related to college enrollment, even after controlling for student demographic and ability characteristics and high school‐level predictors. For example, the odds of attending a 4‐year postsecondary institution increased by at least 171% for all three AP participation groups (taking either one AP exam, two or three AP exams, or four or more AP exams) as compared to students who took no AP exams. Given the current political environment and the renewed interest in readying high school students for college, these results may help inform and shape educational initiatives targeted at the school, district, state, or even national level.
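As a quick arithmetic note on the headline figure: an increase in odds of at least 171% corresponds to an odds ratio of at least 2.71, since the odds ratio equals 1 plus the proportional increase. The sketch below shows, on simulated data with hypothetical variable names (not the study's national sample or model specification), how such odds ratios are typically estimated from a logistic model with ability and demographic controls.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; ap_group: 0 = no AP exams, 1 = one exam,
# 2 = two or three exams, 3 = four or more exams.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "ap_group": rng.integers(0, 4, n),
    "ability": rng.normal(0, 1, n),   # prior-achievement control
    "ses": rng.normal(0, 1, n),       # demographic control
})
true_logit = -0.2 + 1.0 * (df["ap_group"] > 0) + 0.5 * df["ability"] + 0.3 * df["ses"]
df["enroll_4yr"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

model = smf.logit("enroll_4yr ~ C(ap_group) + ability + ses", data=df).fit()

# exp(coefficient) is the odds ratio; an odds ratio of 2.71 for an AP group
# would mean the odds of 4-year enrollment are 171% higher than for students
# who took no AP exams, holding the controls constant.
print(np.exp(model.params))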
The current study evaluated the relationship of various operationalizations of Advanced Placement® (AP) exam and course information with first-year grade point average (FYGPA) in college to better understand the role of AP in college admission decisions. In particular, the incremental validity of the different AP variables, above relevant demographic and academic variables, in predicting FYGPA was explored using hierarchical linear modeling. The AP variables of interest included the following: the number of AP exams the student took, the number of AP exams on which the student earned a score of 3 or higher, the proportion of AP exams the student took relative to the number of AP courses offered at his or her high school, and his or her average AP score, highest AP score, and lowest AP score. Results showed that the AP predictor that most improved model fit was the average AP exam score. The final model that included multiple AP variables and most improved model fit included the average AP score, the number of AP exams on which the student earned a score of 3 or higher, and the AP exam proportion (which had a negative relationship with FYGPA). These results are particularly relevant and timely for college admission and measurement professionals, as AP course-taking information, as opposed to AP exam score information, tends to be more regularly factored into admission decisions, if and when AP information is considered at all.
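A minimal sketch of the kind of two-level comparison described here (students nested in high schools), contrasting model fit with and without an AP predictor. The data and variable names (fygpa, hsgpa, sat, ap_avg, school) are simulated placeholders under assumed effect sizes, not the study's variables or modeling code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: 50 high schools with 40 students each.
rng = np.random.default_rng(2)
n_schools, per_school = 50, 40
school = np.repeat(np.arange(n_schools), per_school)
school_effect = rng.normal(0, 0.2, n_schools)[school]
df = pd.DataFrame({
    "school": school,
    "hsgpa": rng.normal(0, 1, n_schools * per_school),
    "sat": rng.normal(0, 1, n_schools * per_school),
    "ap_avg": rng.normal(3, 1, n_schools * per_school),  # average AP exam score
})
df["fygpa"] = (2.8 + 0.3 * df["hsgpa"] + 0.2 * df["sat"] + 0.1 * df["ap_avg"]
               + school_effect + rng.normal(0, 0.5, len(df)))

# Random-intercept models fit by maximum likelihood so log-likelihoods are comparable.
base = smf.mixedlm("fygpa ~ hsgpa + sat", df, groups=df["school"]).fit(reml=False)
with_ap = smf.mixedlm("fygpa ~ hsgpa + sat + ap_avg", df, groups=df["school"]).fit(reml=False)

# A larger log-likelihood for the second model indicates improved fit, i.e.,
# incremental contribution of the AP predictor above the academic covariates.
print("log-likelihood without AP:", base.llf)
print("log-likelihood with AP:   ", with_ap.llf)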
The purpose of this study was to examine the relationship between academic self-beliefs (i.e., self-efficacy and degree aspirations) and various academic outcomes. Based on previous findings, it was hypothesized that students with more positive academic self-beliefs would perform better in school. The results supported prior research: students with higher academic self-beliefs also had higher SAT scores, grades, and second-year retention rates. Students with more negative writing and math self-efficacy beliefs were more likely to state that they would desire help with improving those skills. Suggestions are discussed for how those in college counseling positions can intervene and provide assistance.
In 2018, 26 states administered a college admissions test to all public school juniors. Nearly half of those states proposed to use those scores as their academic achievement indicators for federal accountability under the Every Student Succeeds Act (ESSA); many others are planning to use those scores for other accountability purposes. Accountability encompasses a number of different uses and subsumes a variety of claims. For states proposing to use summative tests for accountability, a validity argument needs to be developed, which entails delineating each specific use of test scores associated with accountability, identifying appropriate evidence, and offering a rebuttal to counterclaims. The aim of this article is to support states in developing a validity argument for use of college admission test scores for accountability by identifying claims that are applicable across states, along with summarizing existing evidence as it relates to each of these claims. As outlined by The Standards for Educational and Psychological Testing, multiple sources of evidence are used to address each claim. A series of threats to the validity argument, including weaker alignment with content standards and the potential to narrow teaching, are reviewed. Finally, the article contrasts validity evidence, primarily from research on the ACT, with regulatory requirements from ESSA. The Standards and guidance addressing the use of a “nationally recognized high school academic assessment” (Elementary and Secondary Education Act (ESEA), Negotiated Rulemaking Committee; Department of Education) are the primary sources for the organization of validity evidence.