As psychologists we often make consequential decisions ourselves, and we help other professionals, such as HR managers and admissions officers, to make decisions. To this end, we construct instruments and provide guidelines. Take the following examples: An admissions officer uses standardized tests, interview impressions, and letters of recommendation to admit students. An organizational psychologist uses cognitive ability tests, resumes, self-report questionnaires, and assessment centers to hire new senior managers. A clinical psychologist uses standardized assessments and behavioral observations to advise whether to release prisoners on parole. In each of these situations, decision makers collect multiple pieces of information, in line with recommendations by, for example, the International Test Commission (2001, p. 104), and combine that information to make predictions and decisions.

In most situations, the primary goal of collecting and combining multiple pieces of information is to optimize the prediction of future human performance or behavior, resulting in the best possible decisions. For example, universities want to select students who will show the best academic performance (Kuncel & Hezlett, 2007; Zwick, 2007), and organizations want to select applicants who will perform the best on the job (Lievens et al., 2021; Schmidt & Hunter, 1998). To optimize those predictions and decisions, two important questions are: (1) What information should be collected, and how? (2) How should that information be combined? Many studies have been conducted to answer these questions, and their results mostly provide coherent and robust conclusions (Kuncel et al., 2013; Lievens et al., 2021; Milkman et al., 2009; Sackett et al., 2022). Yet, decision makers often do not follow the recommendations from these studies and, instead, use suboptimal instruments and procedures to collect and combine information. Thus, there is a substantial science-practice gap (Highhouse, 2008). To reduce this gap, the primary aim of this thesis was to investigate how decision makers can be encouraged to use evidence-based assessment and decision-making procedures, and how that affects predictive validity.

The research presented in this thesis is relevant in many contexts where information is collected and combined to make decisions. Such decisions include, among others, medical diagnoses and treatment decisions, child custody and asylum decisions, bail decisions, investment decisions, and decisions to grant patents (Grove et al., 2000; Kahneman et al., 2021). I focused on human performance prediction in hiring and admissions decisions because they represent two societally important decision contexts. Furthermore, archival applicant data containing various predictors and criteria were available for these contexts.

Human performance predictions are moderately valid when decision makers combine information holistically in their head (Morris et al., 2015). Yet, substantial evidence shows that much more valid performance predictions are made when information is combined algorithmically rather than holistically.
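To make the contrast concrete, algorithmic (or mechanical) combination means applying a fixed rule, such as a weighted sum of standardized predictor scores, identically to every applicant, rather than weighing the information intuitively in one's head. The sketch below is a minimal illustration of this idea, not a method from this thesis; the predictor names, example scores, and default unit weighting are all assumptions chosen for demonstration.

```python
import numpy as np

def algorithmic_prediction(predictors, weights=None):
    """Combine predictor scores into one predicted performance score
    with a fixed linear rule (mechanical/algorithmic combination).

    predictors : (n_applicants, n_predictors) array of raw scores
    weights    : optional per-predictor weights; defaults to unit
                 weighting, i.e., every predictor counts equally
    """
    X = np.asarray(predictors, dtype=float)
    # Standardize each predictor so all contribute on a common scale
    z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    if weights is None:
        weights = np.ones(z.shape[1])  # assumed default: unit weights
    # The same rule is applied to every applicant, with no holistic
    # case-by-case adjustment by the decision maker
    return z @ weights

# Hypothetical applicants scored on a cognitive ability test, a
# structured interview, and a work sample (all values invented).
scores = [[110, 4.2, 7.5],
          [125, 3.8, 8.0],
          [102, 4.9, 6.0]]
print(algorithmic_prediction(scores))  # higher = better predicted performance
```

In practice the weights could also be estimated from archival data (e.g., by regressing the criterion on the predictors); the defining feature of algorithmic combination is only that the same explicit rule is applied consistently to every case.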