Modeling and simulation are increasingly relied upon in many fields of science and engineering as computational surrogates for experimental testing. To justify the use of these simulations for decision making, however, it is critical to determine, and when necessary mitigate, the biases and uncertainties in model predictions, a task that invariably requires validation experiments. To use experimental resources efficiently, validation experiments must be designed to yield the greatest possible improvement in model predictive ability through calibration of the model against the experimental data. This need for efficiency is addressed by the concept of optimal design of validation experiments, which entails selecting experiment settings by optimizing a predefined criterion. This paper presents an improved optimization criterion that incorporates two important factors in the optimal design of validation experiments: (1) how well the model reproduces the validation experiments, and (2) how well the validation experiments cover the domain of applicability. The criterion presented herein selects appropriate settings for future experiments with the goal of achieving a desired level of predictive ability in the computer model using a minimal number of validation experiments. The criterion explores the entirety of the application domain by including the effect of coverage, and exploits areas of the domain with high variability by including the effect of an empirically defined discrepancy bias. The effectiveness of the new criterion is compared with that of two well-established criteria through a simulated case study involving the stress-strain response and texture evolution of polycrystalline materials. The proposed criterion is shown to be efficient at improving the predictive capability of the numerical model, particularly when the amount of experimental data available for validation is limited.
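
The abstract does not give the criterion's functional form; as a rough illustration of the explore/exploit idea it describes, the following minimal Python sketch scores candidate experiment settings by a weighted sum of a coverage term (distance to the nearest already-tested setting) and an exploitation term (model-experiment discrepancy interpolated from past validation runs). The weight `alpha`, the inverse-distance interpolation, and all function names are assumptions for illustration only, not the paper's actual method.

```python
# Hypothetical sketch of a combined exploration/exploitation selection
# criterion. The alpha weighting and the inverse-distance discrepancy
# interpolation are illustrative assumptions, not the paper's criterion.
import numpy as np

def interpolated_discrepancy(x, tested, discrepancies, eps=1e-9):
    """Estimate model-experiment discrepancy at candidate setting x by
    inverse-distance weighting of discrepancies observed at the
    already-tested settings (a simple stand-in for an empirical
    discrepancy model)."""
    d = np.linalg.norm(tested - x, axis=1)
    w = 1.0 / (d + eps)
    return float(np.dot(w, discrepancies) / w.sum())

def select_next_experiment(candidates, tested, discrepancies, alpha=0.5):
    """Pick the candidate setting that maximizes a weighted sum of
    coverage (explore) and estimated discrepancy (exploit)."""
    # Coverage: distance from each candidate to its nearest tested setting.
    cover = np.array([np.min(np.linalg.norm(tested - c, axis=1))
                      for c in candidates])
    # Exploitation: estimated discrepancy at each candidate setting.
    disc = np.array([interpolated_discrepancy(c, tested, discrepancies)
                     for c in candidates])
    # Normalize both terms to [0, 1] so alpha trades them off directly.
    cover = cover / (cover.max() + 1e-12)
    disc = disc / (disc.max() + 1e-12)
    score = alpha * cover + (1.0 - alpha) * disc
    return candidates[int(np.argmax(score))]

# Example: two-dimensional settings domain, three prior validation runs.
rng = np.random.default_rng(0)
tested = rng.uniform(0, 1, size=(3, 2))        # settings already tested
discrepancies = np.abs(rng.normal(size=3))     # observed |model - experiment|
candidates = rng.uniform(0, 1, size=(200, 2))  # candidate future settings
print(select_next_experiment(candidates, tested, discrepancies))
```

Under these assumptions, a large `alpha` biases selection toward unexplored regions of the domain (coverage), while a small `alpha` concentrates experiments where the empirical discrepancy is high; the paper's case study compares its criterion against established alternatives rather than this toy trade-off.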