The purpose of this research is to demonstrate how applying natural language processing (NLP) to narrative application data can improve prediction and reduce racial subgroup differences in scores used for selection decisions, compared to mental ability test scores and numeric application data. We posit that job-related constructs not captured by traditional predictors can be gleaned from applicant text data using NLP. We test our hypotheses in an operational context across four samples (total N = 1,828) to predict selection into Officer Training School in the U.S. Air Force. Boards of three senior officers make selection decisions using a highly structured rating process based on mental ability tests, numeric application information (e.g., number of past jobs, college grades), and narrative application information (e.g., past job duties, achievements, interests, statements of objectives). Results showed that NLP scores of the narrative application generally (a) predict Board scores, when combined with test scores and numeric application information, at a level equivalent to the correlation between human raters (r = .60), (b) add incremental prediction of Board scores beyond mental ability tests and numeric application information, and (c) reduce subgroup differences between racial minorities and nonminorities in Board scores compared to mental ability tests and numeric application information. Moreover, NLP scores (a) predict job (training) performance, (b) add incremental prediction of job (training) performance beyond mental ability tests and numeric application information, and (c) even add prediction beyond Board scores. Scoring narrative application data with NLP shows promise in addressing the validity-adverse impact dilemma in selection.
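The abstract does not specify the NLP technique used to score narrative text. As a minimal sketch of one common approach consistent with the described task (mapping applicant narratives onto rater scores), the example below uses TF-IDF features with a ridge regression model; the pipeline, scikit-learn components, and toy data are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: scoring narrative application text against rater scores.
# The paper does not disclose its NLP pipeline; this illustrates one common
# approach (TF-IDF features + regularized linear regression) on made-up data.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Toy narrative application fields (in practice: past job duties, achievements,
# interests, and statements of objectives for N = 1,828 applicants).
texts = [
    "Led a maintenance team of 12 and completed safety certification early.",
    "Volunteered as a tutor; goal is to serve as a logistics officer.",
    "Managed inventory systems and trained new hires on procedures.",
]
board_scores = [8.5, 7.0, 7.5]  # hypothetical structured Board ratings

# TF-IDF turns each narrative into a sparse word/bigram frequency vector;
# ridge regression maps those vectors onto the rating scale.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    Ridge(alpha=1.0),
)
model.fit(texts, board_scores)

# Produce an NLP score for a new applicant's narrative text.
new_text = ["Supervised aircraft maintenance crews and mentored junior airmen."]
print(model.predict(new_text))
```

In a study like this one, such NLP scores would then be evaluated for incremental validity over test scores and numeric application information, and for subgroup differences, rather than used in isolation.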