which is obtained from (1.1) by replacing E_ω[v(ω, x)] by Q(x). The advantage is that (1.2) features a convex objective function, and thus it is easier to solve than the original model (1.1). Of course, without further restrictions on Q, we do not expect to obtain good solutions for (1.1) by solving the approximating model (1.2).

Throughout, we assume that for every first-stage decision x, the second-stage costs v(ω, x) are finite with probability 1. To motivate this, note that if v(ω, x) = +∞ with positive probability, then the decision x may result in irreparable infeasibilities with respect to the random goal constraints, and thus should be considered infeasible. This situation is undesirable from a computational point of view, and thus we exclude it by assuming complete recourse, see Definition 1.1.

Definition 1.1. The recourse is complete if and only if for every s ∈ ℝ^m, there exists a y ∈ Y such that Wy ≥ s. Then, v(ω, x) < +∞ for every ω ∈ Ω and x ∈ ℝ^n.

Theorem 1.2. Assume that the recourse is complete and sufficiently expensive, and that the random data (h(ω), T(ω)) satisfy the weak covariance condition. Then,
(i) Q is a finite-valued, convex, continuous, and subdifferentiable function on ℝ^n;
(ii) a subgradient of Q at x is given by u = −E_ω[T(ω)^⊤ λ_ω], where λ_ω is a vector of optimal dual multipliers of the second-stage problem (1.6).

The convexity of Q in Theorem 1.2 enables the use of techniques from convex optimization to efficiently solve continuous recourse models, see also Section 1.3.1. If integer restrictions are imposed on the recourse actions y, however, then convexity of Q is lost, see, e.g., [54], resulting in significant computational challenges. From the perspective of computational complexity, however, the difficulties posed by integer recourse actions are dominated by those caused by the curse of dimensionality.

The master problem (MP) is a linear program and thus can be solved efficiently. Moreover, (MP) is a relaxation of the original problem (1.7), and thus, if an optimal solution (x̂, θ̂) of (MP) is feasible in (1.7), i.e., if θ̂ ≥ Q(x̂), then (x̂, θ̂) is also optimal in (1.7).
If, on the other hand, θ̂ < Q(x̂), then we add a constraint θ ≥ ψ(x) to (MP), in order to cut away (x̂, θ̂), and we resolve (MP).

A prime example of Benders' decomposition for MIR models is the L-shaped method by Van Slyke and Wets [84], which solves continuous recourse problems by exploiting the convexity of Q in Theorem 1.2. In particular, if u ∈ ∂Q(x̂) is a subgradient of Q at x̂, then an optimality cut for Q is given by

ψ(x) := Q(x̂) + u^⊤(x − x̂),

which is tight for Q at x̂, i.e., ψ(x̂) = Q(x̂). Moreover, we can efficiently obtain ψ by using the expression for a subgradient of Q in Theorem 1.2. Finally, if ω follows a finite discrete distribution, then finite convergence can be established under mild conditions [43]. Indeed, we only need a finite number of optimality cuts to completely describe Q, because Q is a convex polyhedral function.

The L-shaped method cannot be used to solve general two-stage MIR models, because convexity of Q is lost if integer restrictions are imposed on the recourse actions [54]. Typically, Benders' decomposition algorithms that solve general MIR models combine ideas from the L-sha...
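The L-shaped iteration described above can be sketched in a few lines of code. The following Python sketch runs the method on a toy complete-recourse instance with a scalar first-stage decision and two scenarios; the instance data, variable names, and the use of scipy's linprog are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.optimize import linprog

# Toy complete-recourse instance (hypothetical, for illustration only):
#   first stage:  min  x + Q(x),  0 <= x <= 20
#   second stage: v(w, x) = min { 2*y : y >= h(w) - x, y >= 0 }
scenarios = [(0.5, np.array([5.0])), (0.5, np.array([15.0]))]  # (prob, h(w))
T = np.array([[1.0]])   # technology matrix T(w), deterministic here
q = np.array([2.0])     # second-stage cost vector
c = np.array([1.0])     # first-stage cost vector

def second_stage(h, x):
    """Solve the second-stage LP; return (v(w,x), dual multipliers lambda)."""
    s = h - T @ x
    # W y >= s rewritten as -W y <= -s for linprog (here W = I)
    res = linprog(q, A_ub=-np.eye(1), b_ub=-s, bounds=[(0, None)], method="highs")
    lam = -res.ineqlin.marginals  # duals of the >= constraints, lam >= 0
    return res.fun, lam

def evaluate_Q(x):
    """Q(x) and a subgradient u = -E_w[T(w)^T lam_w] (as in Theorem 1.2)."""
    Q, u = 0.0, np.zeros_like(x)
    for p, h in scenarios:
        v, lam = second_stage(h, x)
        Q += p * v
        u += p * (-T.T @ lam)
    return Q, u

# Master problem (MP) over (x, theta), with accumulated optimality cuts
cuts_A, cuts_b = [], []          # rows encode u^T x - theta <= u^T xhat - Q(xhat)
for it in range(20):
    A = np.array(cuts_A) if cuts_A else None
    b = np.array(cuts_b) if cuts_b else None
    mp = linprog(np.append(c, 1.0), A_ub=A, b_ub=b,
                 bounds=[(0, 20), (0, None)], method="highs")
    xhat, theta = mp.x[:-1], mp.x[-1]
    Q, u = evaluate_Q(xhat)
    if theta >= Q - 1e-8:        # (xhat, theta) feasible in (1.7) -> optimal
        break
    cuts_A.append(np.append(u, -1.0))  # cut: theta >= Q(xhat) + u^T (x - xhat)
    cuts_b.append(u @ xhat - Q)

print(f"optimal value: {float(c @ xhat + Q):.4f}")  # -> 15.0000 for this instance
```

Because ω here follows a finite discrete distribution and Q is convex polyhedral, the loop terminates after a handful of cuts, matching the finite-convergence remark above; the tolerance 1e-8 guards against numerical noise in the LP duals.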