1. A Pareto Dominance Principle for Data-Driven Optimization.
- Author
-
Sutter, Tobias, Van Parys, Bart P. G., and Kuhn, Daniel
- Subjects
PROBABILITY measures, ROBUST optimization, STOCHASTIC processes, LARGE deviations (Mathematics), PARETO principle
- Abstract
Our paper proposes an effective way to make decisions based on data in uncertain situations. In simple terms, a data-driven decision is just a choice we make by looking at the available data. We express this choice as the best one according to a model we build from the data. The quality of this decision is judged by how well it performs in situations not seen during training, and by how often it disappoints in those situations. The challenge is that we do not know the exact probability distribution that generates the data. An ideal data-driven decision would work well under any possible distribution. However, such ideal decisions are usually unattainable. Therefore, we look for decisions that work well on unseen data while controlling the chance of disappointment. We prove that such effective decisions exist under certain conditions, allowing for practical applications. This approach holds regardless of whether the original problem is simple or complex, and it works even when the data points are not collected independently. Our study also uncovers how the characteristics of the data-generating process influence the optimal decision-making model.

We propose a statistically optimal approach to construct data-driven decisions for stochastic optimization problems. Fundamentally, a data-driven decision is simply a function that maps the available training data to a feasible action. It can always be expressed as the minimizer of a surrogate optimization model constructed from the data. The quality of a data-driven decision is measured by its out-of-sample risk. An additional quality measure is its out-of-sample disappointment, which we define as the probability that the out-of-sample risk exceeds the optimal value of the surrogate optimization model. The crux of data-driven optimization is that the data-generating probability measure is unknown.
An ideal data-driven decision should therefore minimize the out-of-sample risk simultaneously with respect to every conceivable probability measure (and thus in particular with respect to the unknown true measure). Unfortunately, such ideal data-driven decisions are generally unavailable. This prompts us to seek data-driven decisions that minimize the in-sample risk subject to an upper bound on the out-of-sample disappointment, again simultaneously with respect to every conceivable probability measure. We prove that such Pareto dominant data-driven decisions exist under conditions that allow for interesting applications: the unknown data-generating probability measure must belong to a parametric ambiguity set, and the corresponding parameters must admit a sufficient statistic that satisfies a large deviation principle. If these conditions hold, we can further prove that the surrogate optimization model generating the optimal data-driven decision must be a distributionally robust optimization problem constructed from the sufficient statistic and the rate function of its large deviation principle. This shows that the optimal method for mapping data to decisions is, in a rigorous statistical sense, to solve a distributionally robust optimization model. Perhaps surprisingly, this result holds irrespective of whether the original stochastic optimization problem is convex, and it holds even when the training data are not independent and identically distributed. As a byproduct, our analysis reveals how the structural properties of the data-generating stochastic process impact the shape of the ambiguity set underlying the optimal distributionally robust optimization model.

Funding: This research was supported by the Swiss National Science Foundation under the NCCR Automation [Grant Agreement 51NF40_180545]. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2021.0609. [ABSTRACT FROM AUTHOR]
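To make the abstract's recipe concrete, the surrogate model it describes can be sketched as follows. The notation here is ours, not quoted from the paper: \(\hat{\theta}_T\) denotes the sufficient statistic of the training sample \(\xi_1,\dots,\xi_T\), \(I\) the rate function of its large deviation principle, \(\ell\) the loss, and \(r>0\) a tunable radius.

```latex
% Hedged sketch (notation assumed, not verbatim from the paper):
% the optimal data-driven decision minimizes the worst-case expected loss
% over all parameters close to the sufficient statistic, with closeness
% measured by the rate function I of its large deviation principle.
\hat{x}_T \in \argmin_{x \in X}
  \;\sup_{\theta \,:\, I(\hat{\theta}_T,\, \theta) \le r}
  \;\mathbb{E}_{\mathbb{P}_\theta}\!\bigl[\ell(x, \xi)\bigr]
% Larger r lowers the out-of-sample disappointment probability
% (roughly at rate e^{-rT}) but makes the in-sample objective more
% conservative, which is the Pareto trade-off the paper optimizes.
```

For the special case of i.i.d. data on a finite support, Sanov's theorem gives relative entropy as the rate function, so the ambiguity set reduces to a Kullback-Leibler ball around the empirical distribution.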
- Published
- 2024