We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of three steps: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

Keywords: Multiple linear regression · Variable selection · Relative importance · DUPLEX · SPSS

Traditional regression analysis consists of fitting an a priori specified model in which the predictors are (ideally) uncorrelated. In contrast to this approach, however, most applications in contemporary regression belong to the exploratory data analysis framework (e.g., Box, 1983): An initial, tentative model, possibly with correlated predictors, is proposed and evolves through an iterative process that concludes with a final chosen equation. The central issue in this process is how to select the predictors that will be included in the final model. Mitchell and Beauchamp (1988) provided some of the reasons for selecting a best (in some sense) set of predictors: (1) to express the relationship between the criterion and the predictors as simply as possible; (2) to reduce future prediction costs; (3) to identify the important and the negligible predictors; or (4) to increase the precision of statistical estimates and predictions. However, the procedures that we discuss here are designed to select the best set of predictors; they are not intended to address more complex issues such as the assessment of directional influences among the predictors, interaction effects, or suppressor effects. This may be considered a limitation (for further discussion, see Bring, 1994, 1995). These issues are usually dealt with by using structural equation models.

Various procedures have been proposed for finding an optimal set of Q predictors from the P potential predictors: for example, the Akaike information criterion (Akaike, 1973), the C_p criterion (Mallows, 1973), or the Bayesian information criterion (Akaike, 1978; Schwarz, 1978). These procedures are based on a comparison of all 2^P possible subsets, so when P is large, the computational requirements can be prohibitive (the first sketch below illustrates this cost). As a practical solution, practitioners typically use heuristic methods to reduce the number of potential predictors: stepwise selection, forward selection, or backward elimination (see, e.g., Miller, 1990, for a detailed discussion). These methods sequentially include (or exclude) predictors based on the assessment of significant changes in R² (the second sketch below illustrates the forward variant). An alternative and apparently simple approach to the selection problem is to choose the most important predictors. However, as Nunnally and Bernstein (1994, pp. 191-193) n...
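To make the cost of the exhaustive approach concrete, here is a minimal Python sketch (not the article's SPSS program; all function names are illustrative) that scores every non-empty subset of the P predictors with the AIC and BIC mentioned above, computed up to an additive constant from the OLS residual sum of squares under the usual Gaussian likelihood:

```python
# Illustrative sketch of all-subsets selection; not the article's SPSS syntax.
import itertools
import numpy as np

def fit_rss(X, y):
    """Residual sum of squares of an OLS fit with an intercept term."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return float(resid @ resid)

def all_subsets(X, y, criterion="bic"):
    """Exhaustively score all 2^P - 1 non-empty subsets of the columns of X."""
    n, P = X.shape
    best_score, best_subset = np.inf, None
    for q in range(1, P + 1):
        for subset in itertools.combinations(range(P), q):
            rss = fit_rss(X[:, list(subset)], y)
            k = q + 1  # estimated parameters: q slopes plus the intercept
            if criterion == "aic":
                score = n * np.log(rss / n) + 2 * k
            else:  # BIC penalizes model size more heavily when log(n) > 2
                score = n * np.log(rss / n) + k * np.log(n)
            if score < best_score:
                best_score, best_subset = score, subset
    return best_subset, best_score
```

Even at, say, P = 30 this double loop would have to visit more than a billion subsets, which is precisely the prohibitive computational burden referred to in the text.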
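The heuristic alternative can be sketched in the same spirit. The following is a hedged illustration of forward selection as described above: at each step the candidate predictor yielding the largest significant increment in R² (judged by a partial F test) enters the equation. The alpha-to-enter value and the helper names are assumptions made for the example, not part of the article's program.

```python
# Illustrative sketch of forward selection based on significant changes in R^2.
import numpy as np
from scipy import stats

def r_squared(X, y):
    """Coefficient of determination of an OLS fit with an intercept term."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    tss = float(((y - y.mean()) ** 2).sum())
    return 1.0 - float(resid @ resid) / tss

def forward_selection(X, y, alpha_enter=0.05):
    n, P = X.shape
    selected, r2_current = [], 0.0
    while len(selected) < P:
        best_p, best_j, best_r2 = 1.0, None, r2_current
        for j in range(P):
            if j in selected:
                continue
            r2_new = r_squared(X[:, selected + [j]], y)
            # Partial F test on the R^2 increment from adding one predictor.
            df2 = n - len(selected) - 2  # n - (q + 1 slopes) - 1 intercept
            F = (r2_new - r2_current) / ((1.0 - r2_new) / df2)
            p = stats.f.sf(F, 1, df2)
            if p < best_p:
                best_p, best_j, best_r2 = p, j, r2_new
        if best_j is None or best_p >= alpha_enter:
            break  # no remaining predictor enters significantly
        selected.append(best_j)
        r2_current = best_r2
    return selected
```

Backward elimination and stepwise selection follow the same logic with the inclusion test reversed or combined with a removal test; all of these heuristics trade the guarantee of the exhaustive search for a feasible number of model fits.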