We study the classical newsvendor problem in which the decision maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision maker does not know the underlying distribution driving uncertainty but has access only to historical data. In turn, the key questions are how to map existing data to a decision and what type of performance to expect as a function of the data size. We analyze the classical setting with access to past samples drawn from the distribution (e.g., past demand), focusing not only on asymptotic performance but also on what we call the transient regime of learning, that is, performance for arbitrary data sizes. We evaluate the performance of any algorithm through its worst-case relative expected regret, compared with an oracle with knowledge of the distribution. We provide the first finite-sample exact analysis of the classical sample average approximation (SAA) algorithm for this class of problems across all data sizes. This analysis uncovers novel fundamental insights into the value of data: It reveals that tens of samples are sufficient to perform very efficiently but also that more data can lead to worse out-of-sample performance for SAA. We then focus on the general class of mappings from data to decisions, without any restriction on the set of policies, derive an optimal algorithm (in the minimax sense), and characterize its associated performance. This leads to significant improvements for limited data sizes and allows us to quantify exactly the value of historical information. This paper was accepted by David Simchi-Levi, data science. Supplemental Material: The data files and online appendix are available at https://doi.org/10.1287/mnsc.2023.4725.
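For concreteness, the following is a minimal, illustrative Python sketch (not from the paper) of the SAA mapping from historical demand samples to an order quantity; the function name, cost values, and sample-generating distribution are assumptions made for illustration only.

```python
import numpy as np

def saa_newsvendor_order(samples, underage_cost, overage_cost):
    """Sample average approximation (SAA) order quantity for the newsvendor.

    SAA minimizes the empirical expected cost
        (1/n) * sum_i [ b * (d_i - q)^+ + h * (q - d_i)^+ ],
    whose minimizer is the ceil(n * b / (b + h))-th smallest sample, i.e.,
    the empirical quantile of demand at the critical fractile b / (b + h).
    """
    samples = np.sort(np.asarray(samples, dtype=float))
    n = len(samples)
    critical_fractile = underage_cost / (underage_cost + overage_cost)
    k = int(np.ceil(n * critical_fractile))  # 1-based order-statistic index
    return samples[max(k, 1) - 1]

# Illustrative usage (assumed values): 20 historical demand observations,
# underage cost b = 3 and overage cost h = 1, so the target fractile is 0.75.
rng = np.random.default_rng(0)
demand_samples = rng.exponential(scale=100.0, size=20)
print(saa_newsvendor_order(demand_samples, underage_cost=3.0, overage_cost=1.0))
```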