Statistical decision theory is a set of techniques for making optimal decisions under uncertainty. Both frequentist and Bayesian approaches have been developed. The definition of a Bayesian decision problem requires the specification of an unknown parameter and its prior probability density function (pdf), observed data (if any), a likelihood function, a set of possible decisions or actions, and a loss or utility function whose expectation is to be optimized. The expected loss can be defined in several ways, yielding the frequentist risk, the conditional Bayes risk, and the Bayes risk. The goal is generally to select a decision rule, a mapping from the possible observed data to the set of actions, that minimizes the Bayes risk. Alternatively, a frequentist may seek to minimize the maximum frequentist risk that might be experienced over all possible fixed values of the unknown parameter, yielding the optimal “minimax” decision rule. Examples in the chapter include selecting a drug therapy when the patient's diagnosis is unclear and deciding whether to order a test to determine which disease is present, choosing a sample size when the goal is to estimate the unknown mean of a normally distributed variable, sequential stopping rules derived by backward induction, and sequential allocation or assignment (bandit problems).
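
To make the three notions of expected loss concrete, a standard set of definitions (stated here in generic notation, which need not match the chapter's) is as follows. With unknown parameter \(\theta\) having prior \(\pi(\theta)\), data \(x\) with likelihood \(p(x \mid \theta)\), loss \(L(\theta, a)\) for action \(a\), and decision rule \(\delta\) mapping data to actions,

\[
R(\theta, \delta) = \mathbb{E}_{x \mid \theta}\!\left[ L(\theta, \delta(x)) \right], \qquad
\rho(\pi, a \mid x) = \mathbb{E}_{\theta \mid x}\!\left[ L(\theta, a) \right], \qquad
r(\pi, \delta) = \mathbb{E}_{\theta \sim \pi}\!\left[ R(\theta, \delta) \right],
\]

giving the frequentist risk, the conditional Bayes risk (posterior expected loss), and the Bayes risk, respectively. The corresponding optimal rules are

\[
\delta^{\pi}(x) = \arg\min_{a} \, \rho(\pi, a \mid x), \qquad
\delta^{\ast} = \arg\min_{\delta} \, \max_{\theta} \, R(\theta, \delta),
\]

where the Bayes rule \(\delta^{\pi}\), which minimizes the posterior expected loss at each observed \(x\), also minimizes the Bayes risk \(r(\pi, \delta)\), while \(\delta^{\ast}\) is the minimax rule.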
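
The following is a minimal numerical sketch in Python of an illustrative two-disease version of the drug-therapy example; the loss table, prior, and test accuracies are invented for illustration and are not the chapter's numbers. It computes the conditional Bayes risk of each treatment after a diagnostic test result and contrasts the Bayes action with the minimax action.

import numpy as np

# Hypothetical setup: theta in {disease A, disease B}; actions are
# "treat for A" or "treat for B". loss[i, j] is the loss of action j
# when the true disease is i (illustrative values).
loss = np.array([[0.0, 10.0],    # true A: right drug costs 0, wrong costs 10
                 [20.0, 0.0]])   # true B: wrong drug costs 20, right costs 0
prior = np.array([0.7, 0.3])     # assumed prior P(A), P(B)

# Assumed diagnostic test accuracy: P(result | disease); rows index the
# disease, columns the result (negative, positive).
likelihood = np.array([[0.9, 0.1],   # disease A: P(neg|A), P(pos|A)
                       [0.2, 0.8]])  # disease B: P(neg|B), P(pos|B)

for result in (0, 1):
    # Posterior over diseases given the test result (Bayes' theorem).
    post = prior * likelihood[:, result]
    post /= post.sum()
    # Conditional Bayes risk of each action: posterior expected loss.
    cond_risk = post @ loss
    print(f"result={'neg' if result == 0 else 'pos'}: "
          f"posterior={post.round(3)}, "
          f"conditional Bayes risk={cond_risk.round(2)}, "
          f"Bayes action={'treat A' if cond_risk.argmin() == 0 else 'treat B'}")

# Minimax over the two fixed (non-randomized, data-ignoring) actions:
# choose the action whose worst-case loss over theta is smallest.
worst_case = loss.max(axis=0)
print("minimax action (no data):",
      "treat A" if worst_case.argmin() == 0 else "treat B")

With these made-up numbers, the Bayes action follows the posterior (treat for A after a negative result, for B after a positive one), while the minimax action guards against the worst-case disease regardless of the prior, which is why the two criteria can disagree.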