When the underlying probability distribution in a stochastic optimization problem is observed only through data, various data-driven formulations have been studied for obtaining approximately optimal solutions. We show that no such formulation can, in a sense, theoretically improve the statistical quality of the solution obtained from empirical optimization. We argue this by proving that the first-order behavior of the optimality gap against the oracle best solution, which captures both bias and variance, for any data-driven solution is second-order stochastically dominated by that of empirical optimization, as long as suitable smoothness holds with respect to the underlying distribution. We demonstrate this impossibility of improvement in a range of examples, including regularized optimization, distributionally robust optimization, parametric optimization, and Bayesian generalizations. We also discuss connections between our results and semiparametric statistical inference, as well as other perspectives in the data-driven optimization literature.