Most decisions based on predictions from a model have uncertain outcomes. The uncertainty may be exogenous or endogenous to the modeled process, or both, and can greatly affect the degree to which the decision-maker's goals are met. In this thesis I study optimal-control problems for systems with endogenous uncertainty that can be reduced by manipulating the system input. When the decision-maker can improve performance by actively reducing uncertainty ("active learning"), the optimal sequence of decisions has a dual nature: the decisions, or inputs, must direct the process toward the desired state while also ensuring that information-rich data are generated, so that decision-relevant uncertainty is resolved.

The dual-control problem can be defined as that of minimizing the expected output error (∗), where y* is the output reference, Y(t + N − 1 | t) represents future information up to time t + N − 1 in addition to all past information, and the model for the system output y is not fully known. My coworkers and I propose a novel reformulation technique for a probabilistically constrained stochastic dual-control problem and show that the optimal strategy for minimizing this cost function involves active exploration of the plant to generate informative data. It is consequently necessary that the model that informs the decision-making include how future data resolve uncertainty. The reformulation permits practical algorithms for true dual control for a class of systems, allows new interpretation of earlier approaches, and may guide approximate dual-control designs where no exact results are obtainable.

In addition to useful algorithms, this thesis contains a number of conceptual insights. In particular, the most recent results provide a foundation from which we argue that the conventional dual-control interpretation, involving a trade-off between control and exploration, is a false dichotomy.
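A cost of the form commonly used in the dual-control literature, consistent with the notation above (a sketch assuming a quadratic output-error criterion over a horizon of N steps, not necessarily the exact expression from the thesis), is:

```latex
J(t) = \mathbb{E}\left\{ \sum_{k=1}^{N} \bigl(y(t+k) - y^{*}\bigr)^{2}
       \;\middle|\; Y(t) \right\} \tag{$*$}
```

Here each future input u(t + k) is a function of the information available when it is applied, so the expectation implicitly accounts for how upcoming measurements resolve uncertainty; this feedback dependence is what gives the problem its dual character.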
These derivations clearly show, for a specific system class, that control of the nominal model and a specific form of uncertainty reduction are both necessary components of the optimal control, as opposed to separate entities in which uncertainty reduction can be sacrificed for increased control performance.

I consider the approaches to dual control presented in this thesis as falling into one of three categories: minimization of (i) a heuristic objective that is different from, yet still reduces, the dual objective (∗); (ii) a systematic approximation of (∗); or (iii) an exact reformulation of (∗). The main contributions of this thesis are taken from three papers. Each of the three main papers introduces a control design that involves solving a finite-horizon optimal-control problem (OCP) at every sampling instant, with the initial state set to the current state of the plant. Common to the algorithms is their foundation in model-predictive control (MPC). The approaches each involve augmenting the standard OCP in MPC with cost-function terms and constraints, with the result that the c...
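The receding-horizon scheme described above can be sketched in a few lines of Python: at each sampling instant a finite-horizon OCP is solved from the current plant state, and only the first input is applied. This is a minimal certainty-equivalence sketch for a scalar linear plant with known parameters (the names `mpc_step`, `a`, `b` are illustrative, and the dual/exploration terms that the thesis adds to the OCP are deliberately omitted):

```python
import numpy as np

def mpc_step(x0, a, b, x_ref, N, r=1e-3):
    """Solve one finite-horizon OCP for the scalar plant x+ = a*x + b*u."""
    # Stacked prediction over the horizon: x = F*x0 + G*u.
    F = np.array([a**k for k in range(1, N + 1)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a**(k - j) * b
    # Minimize ||x - x_ref||^2 + r*||u||^2 as a regularized least squares.
    A = np.vstack([G, np.sqrt(r) * np.eye(N)])
    y = np.concatenate([x_ref - F * x0, np.zeros(N)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]  # receding horizon: apply only the first input

a, b, x_ref = 0.9, 0.5, 1.0
x = 0.0
for _ in range(30):
    u = mpc_step(x, a, b, x_ref, N=10)
    x = a * x + b * u  # plant update with the applied input
```

Augmenting this OCP with probabilistic constraints and information-dependent cost terms, as in the three papers, is what turns such a nominal MPC loop into a dual controller.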