SUMMARY
Many optimization problems in engineering require coupling a mathematical programming process to a numerical simulation. When the latter is non-linear, the resulting computer time may become unaffordably large because three sequential procedures are nested: the outer loop is associated with the optimization process, the middle one corresponds to the time-marching scheme, and the innermost loop is required for iteratively solving the non-linear system of equations at each time step. We propose four techniques for reducing CPU time. First, the initial values of the state variables at each time step (innermost loop) are derived from those computed at the previous optimization iteration (outermost loop). Second, the time increments are selected on the basis of those used in the previous optimization iteration. Third, convergence criteria for the simulation problem are defined on the basis of the optimization process, so that they are only as stringent as actually needed. Finally, the computations associated with the optimization are shown to be greatly reduced by adopting Newton-Raphson, or a variant, for solving the simulation problem. The effectiveness of these techniques is illustrated through application to three examples involving automatic calibration of non-linear groundwater flow problems. The total number of iterations is reduced by a factor ranging between 1·7 and 4·6.
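The following is a minimal, hypothetical sketch (not the paper's code) of the first and third techniques in a toy setting: an outer calibration loop over a scalar parameter wraps an implicit time-marching simulation whose inner Newton-Raphson loop is warm-started from the state history stored at the previous optimization iteration, and whose tolerance is only as tight as the current outer progress warrants. The model, parameter names, and tolerance rule are illustrative assumptions.

```python
def simulate(k, u0, dt, nsteps, warm_states=None, tol=1e-10):
    """Implicit-Euler march of du/dt = -k*u**2 with Newton-Raphson per step.
    warm_states: states from the previous outer iteration, used as initial
    guesses for the inner Newton loop (None -> use previous time level)."""
    u, states, newton_iters = u0, [u0], 0
    for n in range(nsteps):
        # Initial guess: previous outer iteration's state at this time, if any
        x = warm_states[n + 1] if warm_states is not None else u
        while True:
            f = x + dt * k * x * x - u            # residual of the implicit step
            if abs(f) < tol:
                break
            x -= f / (1.0 + 2.0 * dt * k * x)     # Newton-Raphson update
            newton_iters += 1
        u = x
        states.append(u)
    return states, newton_iters


def misfit(states, obs):
    return sum((s - o) ** 2 for s, o in zip(states, obs))


# Synthetic calibration target (k_true chosen for the example only)
k_true, u0, dt, nsteps = 2.0, 1.0, 0.1, 50
obs, _ = simulate(k_true, u0, dt, nsteps)

# Outer loop: crude golden-section search on k. The inner tolerance is kept
# only as tight as the current bracket on k justifies (illustrative heuristic),
# and the accepted state history is reused as the next warm start.
lo, hi = 0.5, 5.0
prev_states, total_inner = None, 0
phi = 0.5 * (5 ** 0.5 - 1)
for it in range(20):
    tol = max(1e-10, 1e-4 * (hi - lo))            # loosen while far from optimum
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    sa, na = simulate(a, u0, dt, nsteps, prev_states, tol)
    sb, nb = simulate(b, u0, dt, nsteps, prev_states, tol)
    total_inner += na + nb
    if misfit(sa, obs) < misfit(sb, obs):
        hi, prev_states = b, sa                   # keep states as next warm start
    else:
        lo, prev_states = a, sb
print(f"calibrated k ~ {0.5 * (lo + hi):.4f}, inner Newton iterations: {total_inner}")
```

Running the sketch with and without the `warm_states` argument gives a rough feel for the savings the paper quantifies: the warm-started inner loop typically needs fewer Newton iterations per time step once the outer search has narrowed the parameter bracket.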