Experimental design and data collection constitute two main steps of the iterative research cycle (also known as the scientific method). To help evaluate competing hypotheses, it is critical to ensure that the experimental design is appropriate and maximizes the information retrieved from the system of interest. Scientific hypothesis testing is implemented by comparing plausible model structures (conceptual discrimination) and sets of predictions (predictive discrimination). This research presents a new Discrimination-Inference (DI) methodology to identify prospective data sets that are highly suitable for either conceptual or predictive discrimination. The DI methodology uses preposterior estimation techniques to evaluate the expected change in the conceptual or predictive probabilities, as measured by the Kullback-Leibler divergence. We present two case studies of increasing complexity to illustrate implementation of the DI methodology for maximizing the information retrieved from a system of interest. The case studies show that highly informative data sets for conceptual discrimination are, in general, those for which between-model (conceptual) uncertainty is large relative to within-model (parameter) uncertainty and for which redundancy among the individual measurements in the set is minimized. The optimal data set differs if predictive, rather than conceptual, discrimination is the experimental design objective. Our results show that DI analyses highlight measurements that can be used to address critical uncertainties related to the prediction of interest. Finally, we find that the optimal data set for predictive discrimination is sensitive to the definition of the predictive groups in ways that are not immediately apparent from inspection of the model structure and parameter values.
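For concreteness, the data-worth measure described above can be sketched as follows; the notation here is assumed for illustration and is not taken from the paper. For a candidate data set \(\mathbf{d}\) and competing conceptual models \(M_1,\dots,M_K\) with prior probabilities \(P(M_k)\), the discrimination achieved by \(\mathbf{d}\) can be expressed as the Kullback-Leibler divergence of the posterior model probabilities from the priors, and a preposterior (expected) value averages this divergence over the prior-predictive distribution \(p(\mathbf{d})\) of the not-yet-collected data:
\[
D_{\mathrm{KL}}\!\big(P(M \mid \mathbf{d}) \,\|\, P(M)\big) \;=\; \sum_{k=1}^{K} P(M_k \mid \mathbf{d}) \,\ln\!\frac{P(M_k \mid \mathbf{d})}{P(M_k)},
\qquad
\mathbb{E}_{\mathbf{d}}\!\big[D_{\mathrm{KL}}\big] \;=\; \int D_{\mathrm{KL}}\!\big(P(M \mid \mathbf{d}) \,\|\, P(M)\big)\, p(\mathbf{d})\,\mathrm{d}\mathbf{d}.
\]
The analogous form applies to predictive discrimination, with the model probabilities replaced by the probabilities assigned to the predictive groups.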