Quantitative literacy has been identified as an important student learning outcome (SLO) by both the higher education and workforce communities. This paper aims to provide preliminary evidence of the psychometric quality of the pilot forms for HEIghten Quantitative Literacy, a next-generation SLO assessment for students in higher education. We evaluated the psychometric quality of the test items (e.g., item analyses), individual- and group-level reliability, the relationships between assessment scores and related variables (e.g., grade point average), student perceptions of the assessment, and performance differences across college-related and demographic subgroups. Our study used data from a pilot test administered to over 1,500 students at 23 higher education institutions in the United States. Results showed that (a) overall, items functioned well, although a small portion of items should be dropped because of unsatisfactory performance; (b) correlations across sub-areas of the assessment were very high, indicating that the assessment may be unidimensional; (c) reliability estimates similar to those of existing SLO assessments were found at both the individual and group levels; (d) assessment scores correlated positively with high school and college GPA, number of college math courses, self-rated quantitative literacy skills, and college admissions scores; (e) students had positive perceptions of the assessment; and (f) performance differences were found across institution types, college majors, gender, racial/ethnic groups, and language groups, but not across credit-hour categories. Implications for operational test development and for understanding quantitative literacy performance are discussed.

Zhen Wang is a Senior Psychometrician at the Educational Testing Service (ETS). She holds a Ph.D. in Educational Measurement and Research Methodology from the University of British Columbia. Currently, she is the statistical coordinator for ETS' HEIghten® Outcomes Assessment Suite, Major Field Test, Success Navigator, Proficiency Profile, and iSkills projects. She also works on research projects related to automated scoring and higher education. Her research interests include item response theory, structural equation models, hierarchical linear models, rater models, and equating and scaling techniques.