We consider the problem of approximating the expected recourse function for two-stage stochastic programs. Our work is motivated by applications with special structure, such as an underlying network, that allows reasonable approximations to the expected recourse function to be developed. In this paper, we show how these approximations can be improved by combining them with sample gradient information from the true recourse function. For the case of strictly convex nonlinear approximations, we prove convergence of this hybrid approximation. The method is attractive for practical reasons because it retains the structure of the approximation.

A common problem in operations research is the challenge of making a decision now in a way that minimizes the expectation of costs in the future that depend on random events. For example, we may face the problem of determining how much product to ship from plants to warehouses, from which we then satisfy demands at different retailers. We must decide how much product to ship to each warehouse before we know the retail demand. Once retail demands are known, we are able to optimize shipping patterns between warehouses and retailers.

This problem, and many like it, can be posed as a two-stage stochastic program. The decision made now (in stage 1) determines what state we are in when we have to solve the problem in stage 2. If we could exactly capture the structure of the expected cost function (or recourse function) for stage 2, we would be able to make optimal decisions now. The difficulty is that in most cases, the structure of this expected cost function is too complex.

There is an extensive literature on two-stage stochastic programming problems, which is nicely summarized in several recent books (Infanger 1994, Kall and Wallace 1994).
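The shipping example above can be sketched, in its simplest single-warehouse form, as a sample-average computation: choose a stage-1 shipment quantity, then evaluate the expected stage-2 (recourse) cost over sampled demands. The costs and demand distribution below are invented for illustration and are not taken from the paper:

```python
import random

random.seed(0)

# Hypothetical instance: ship x units now at unit cost c; after demand d is
# revealed, pay b per unit of unmet demand and h per unit of excess stock.
c, b, h = 1.0, 4.0, 0.5

def recourse(x, d):
    # Stage-2 cost once demand d is known.
    return b * max(d - x, 0.0) + h * max(x - d, 0.0)

def expected_total_cost(x, demands):
    # Stage-1 cost plus a sample-average estimate of the expected recourse cost.
    return c * x + sum(recourse(x, d) for d in demands) / len(demands)

# Sampled demand scenarios (uniform on [50, 150], purely illustrative).
demands = [random.uniform(50, 150) for _ in range(5000)]

# Grid search over integer shipment quantities.
best_x = min(range(0, 201), key=lambda x: expected_total_cost(x, demands))
```

With these numbers the minimizer lands near the critical fractile (b - c)/(b + h) of the demand distribution; the point of the sketch is only that the expected recourse function must be estimated or approximated before the stage-1 decision can be optimized.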
General solution methods include scenario optimization (e.g., Rockafellar and Wets 1991), stochastic gradient techniques (e.g., Ermoliev 1983, Ruszczynski 1980), Benders decomposition and its variants (e.g., Van Slyke and Wets 1969, Birge 1985, and Higle and Sen 1991), sample path optimization (Robinson 1996), and other approximation techniques (e.g., Beale et al. 1980). These techniques are, for the most part, very general and are not designed specifically to take advantage of approximations that may produce good but not optimal solutions.

In this paper, we propose a new algorithm called SHAPE (successive hybrid approximation procedure) that combines an initial nonlinear approximation with iteratively sampled stochastic gradient information. The initial nonlinear approximation can exploit problem structure, while the stochastic gradient information, which is easy to obtain for most problems, tunes the approximation. We describe the algorithm, provide a small numerical illustration, and prove convergence.

Section 1 provides a formal problem statement and introduces the basic idea behind the algorithm. Section 2 presents a method we call the stochastic hybrid approximation procedure (SHAPE) for solving two-stage stochastic programs with recourse. ...
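The hybrid idea—start from a strictly convex approximation of the recourse function and tilt it with sampled gradient information from the true recourse function—can be illustrated in one dimension. The quadratic starting approximation, step sizes, and demand model below are illustrative assumptions for a sketch, not the paper's actual construction:

```python
import random

random.seed(1)

b, h = 4.0, 0.5  # hypothetical shortage and holding costs, as in the example above

def sample_gradient(x, d):
    # Stochastic (sub)gradient in x of the sampled stage-2 cost
    # b*max(d - x, 0) + h*max(x - d, 0) for one demand scenario d.
    return -b if d > x else h

# Initial strictly convex approximation: a quadratic a*(x - center)**2,
# centered at a rough guess, plus an accumulated linear correction slope*x.
a, center = 0.05, 80.0
slope = 0.0

xs = []
for k in range(1, 2001):
    # Minimize the current approximation a*(x - center)**2 + slope*x in closed form.
    x = center - slope / (2 * a)
    d = random.uniform(50, 150)         # sample a demand scenario
    g = sample_gradient(x, d)           # sampled gradient of the true recourse
    approx_grad = 2 * a * (x - center) + slope  # gradient of the approximation
    alpha = 1.0 / k                     # declining step size
    # Tilt the approximation so its gradient at x moves toward the sample gradient.
    slope += alpha * (g - approx_grad)
    xs.append(x)

tail_avg = sum(xs[-500:]) / 500  # averaged late iterates
```

Because each iterate minimizes the current approximation, the correction only adds a linear term; the strictly convex structure of the initial approximation is preserved throughout, which is the practical appeal noted in the abstract.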