Sequential ranking-and-selection procedures deliver Bayesian guarantees by repeatedly computing a posterior quantity of interest—for example, the posterior probability of good selection or the posterior expected opportunity cost—and terminating when this quantity crosses some threshold. Computing these posterior quantities entails nontrivial numerical computation. Thus, rather than exactly check such posterior-based stopping rules, it is common practice to use cheaply computable bounds on the posterior quantity of interest, for example, those based on Bonferroni’s or Slepian’s inequalities. The result is a conservative procedure that samples more simulation replications than are necessary. We explore how the time spent simulating these additional replications might be better spent computing the posterior quantity of interest via numerical integration, with the potential for terminating the procedure sooner. To this end, we develop several methods for improving the computational efficiency of exactly checking the stopping rules. Simulation experiments demonstrate that the proposed methods can, in some instances, significantly reduce a procedure’s total sample size. We further show these savings can be attained with little added computational effort by making effective use of a Monte Carlo estimate of the posterior quantity of interest. Summary of Contribution: The widespread use of commercial simulation software in industry has made ranking-and-selection (R&S) algorithms an accessible simulation-optimization tool for operations research practitioners. This paper addresses computational aspects of R&S procedures delivering finite-time Bayesian statistical guarantees, primarily the decision of when to terminate sampling. Checking stopping rules entails computing or approximating posterior quantities of interest perceived as being computationally intensive to evaluate. The main results of this paper show that these quantities can be efficiently computed via numerical integration and can yield substantial savings in sampling relative to the prevailing approach of using conservative bounds. In addition to enhancing the performance of Bayesian R&S procedures, the results have the potential to advance other research in this space, including the development of more efficient allocation rules.
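As a rough illustration of the trade-off described above, the sketch below compares a Bonferroni-style lower bound on the posterior probability of good selection with a Monte Carlo estimate of the same quantity. It is not the authors' procedure: the independent normal posteriors, the numerical values, and the tolerance are illustrative assumptions.

```python
# Hypothetical setup (not the paper's exact model): independent normal posteriors
# mu_i ~ N(m_i, s_i^2) on the systems' means; we would select the system with the
# largest posterior mean and want the posterior probability of good selection (PGS).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m = np.array([0.0, 0.3, 0.45, 0.5])   # posterior means (assumed values)
s = np.array([0.2, 0.2, 0.15, 0.15])  # posterior standard deviations (assumed)
delta = 0.1                           # good-selection tolerance (assumed)
best = int(np.argmax(m))              # system we would select now

# Bonferroni-style lower bound: PGS >= 1 - sum_{j != best} P(mu_j - mu_best > delta).
others = np.arange(len(m)) != best
gap = m[others] - m[best]
sd = np.sqrt(s[others] ** 2 + s[best] ** 2)
pgs_bound = 1.0 - norm.cdf((gap - delta) / sd).sum()

# Monte Carlo estimate of the exact posterior PGS.
draws = rng.normal(m, s, size=(100_000, len(m)))
pgs_mc = np.mean(draws[:, best] >= draws.max(axis=1) - delta)

print(f"Bonferroni lower bound: {pgs_bound:.4f}  Monte Carlo estimate: {pgs_mc:.4f}")
```

Because the bound discards the joint structure of the posterior, the Monte Carlo estimate typically sits above it, which is why a stopping rule checked against the bound tends to run longer than one checked against the (estimated) exact quantity.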
Ever since the conception of the statistical ranking-and-selection (R&S) problem, a predominant approach has been the indifference-zone (IZ) formulation. Under the IZ formulation, R&S procedures are designed to provide a guarantee on the probability of correct selection (PCS) whenever the performance of the best system exceeds that of the second-best system by a specified amount. We discuss the shortcomings of this guarantee and argue that providing a guarantee on the probability of good selection (PGS)—selecting a system whose performance is within a specified tolerance of the best—is a more justifiable goal. Unfortunately, this form of fixed-confidence, fixed-tolerance guarantee has received far less attention within the simulation community. We present an overview of the PGS guarantee with the aim of reorienting the simulation community toward this goal. We examine numerous techniques for proving the PGS guarantee, including sufficient conditions under which selection and subset-selection procedures that deliver the IZ-inspired PCS guarantee also deliver the PGS guarantee.
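For reference, under one common notation (ordered true means \(\mu_{[1]} \ge \mu_{[2]} \ge \cdots\), tolerance \(\delta > 0\), and confidence level \(1-\alpha\); the symbols are assumed here rather than taken from the paper), the two guarantees can be written as

\[
\text{PCS (IZ) guarantee:}\quad \Pr\{\text{select system } [1]\} \ge 1-\alpha \quad \text{whenever } \mu_{[1]} - \mu_{[2]} \ge \delta,
\]
\[
\text{PGS guarantee:}\quad \Pr\{\mu_{\text{selected}} \ge \mu_{[1]} - \delta\} \ge 1-\alpha \quad \text{for every configuration of the means.}
\]

The contrast is that the IZ-style PCS guarantee is conditional on the true means being favorably separated, whereas the PGS guarantee holds unconditionally by tolerating near-best selections.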
This paper introduces a major redesign of SimOpt, a testbed of simulation-optimization (SO) problems and solvers. The testbed promotes the empirical evaluation and comparison of solvers and aims to accelerate their development. Relative to previous versions of SimOpt, the redesign ports the code to an object-oriented architecture in Python; uses an implementation of the MRG32k3a random number generator that supports streams, substreams, and subsubstreams; supports the automated use of common random numbers for ease and efficiency; includes a powerful suite of plotting tools for visualizing experiment results; uses bootstrapping to obtain error estimates; accommodates the use of data farming to explore simulation models and optimization solvers as their input parameters vary; and provides a graphical user interface. The SimOpt source code is available on a GitHub repository under a permissive open-source license and as a Python package. History: Accepted by Ted Ralphs, Area Editor for Software Tools. Funding: This work was supported by the National Science Foundation [Grant CMMI-2035086]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1273) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0011) at (http://dx.doi.org/10.5281/zenodo.7468744).
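The following is a minimal conceptual sketch of the common-random-numbers idea mentioned above, written with NumPy's generic random-number tools. It does not use or represent SimOpt's actual API; the toy model, function name, and seed are illustrative assumptions.

```python
# Common random numbers (CRN), conceptual sketch: evaluate two candidate
# solutions of the same simulation model with identical random-number streams
# so their noise is shared, sharpening the estimated difference between them.
import numpy as np

def simulate(solution, rng, n_reps=1000):
    """Toy simulation model (hypothetical): expected cost scales with the
    decision variable `solution`, observed through exponential noise."""
    noise = rng.exponential(scale=1.0, size=n_reps)
    return float(np.mean(solution * noise))

seed = np.random.SeedSequence(2022)
# Seeding both generators from the same SeedSequence reproduces the same stream,
# so both solutions are evaluated under common random numbers.
cost_a = simulate(1.00, np.random.default_rng(seed))
cost_b = simulate(1.05, np.random.default_rng(seed))
print(f"CRN difference estimate: {cost_b - cost_a:.4f}")
```

Dedicated streams and substreams (as provided by generators such as MRG32k3a) serve the same purpose in a structured way: each source of randomness gets its own stream, so repeating a stream across solutions yields CRN automatically.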