“…Several methods for modeling ambiguity sets have been proposed, relying on discrete distributions [14,15], moment constraints [2,16,17], the Kullback-Leibler divergence [18,19], the Prohorov metric [20], and the Wasserstein distance [21,22,1], among others. Of particular interest is the family of data-driven distributionally-robust optimization methods, where the ambiguity set is parameterized by samples of the underlying distribution [23,20,1,13]. These studies have proven to be of great importance in machine learning [24,25,3,26,27].…”
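To make the data-driven construction concrete, a common choice (sketched here with assumed notation, not drawn from any specific cited work) centers the ambiguity set at the empirical distribution of the observed samples; using the Wasserstein distance, for instance:

```latex
% Empirical distribution of N i.i.d. samples \xi^{(1)},\dots,\xi^{(N)}
\hat{\mu}_N \;=\; \frac{1}{N}\sum_{i=1}^{N} \delta_{\xi^{(i)}},
\qquad
\mathcal{P}_{\varepsilon} \;=\; \bigl\{\, \mu \,:\, W\bigl(\mu, \hat{\mu}_N\bigr) \le \varepsilon \,\bigr\},
```

where $\delta_{\xi^{(i)}}$ is the Dirac measure at sample $\xi^{(i)}$, $W$ denotes a Wasserstein distance, and the radius $\varepsilon \ge 0$ controls how far candidate distributions may deviate from the empirical one.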