A computational model of nervous system function during classical and instrumental conditioning is proposed. The model assumes the form of a hierarchical network of control systems. Each control system is capable of learning and is referred to as an associative control process (ACP). Learning systems consisting of ACP networks, employing the drive-reinforcement learning mechanism (Klopf, 1988) and engaging in real-time, closed-loop, goal-seeking interactions with their environments, can be classically and instrumentally conditioned, as demonstrated by computer simulations. In multiple-T mazes, the systems learn to chain responses that avoid punishment and eventually lead to reward. The temporal order in which the responses are learned and extinguished during instrumental conditioning is consistent with that observed in animal learning. Also consistent with experimental evidence from animal learning, the ACP network model accounts for a wide range of classical conditioning phenomena. ACP networks, at their current stage of development, are intended to model sensorimotor, limbic, and hypothalamic nervous system function, suggesting a relationship between classical and instrumental conditioning that extends Mowrer's (1956, 1960a/1973) two-factor theory of learning. In conjunction with this treatment of limbic and hypothalamic function, the role of emotion in natural intelligence is modeled and discussed. ACP networks constitute solutions to the temporal and structural credit assignment problems, suggesting a theoretical approach to the synthesis of machine intelligence.
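
As an aid to readers unfamiliar with the learning mechanism cited above, the following is a minimal Python sketch of a drive-reinforcement-style synaptic update in the general form proposed by Klopf (1988): the weight change is the product of the change in postsynaptic output and a sum, over a short temporal window, of recent positive presynaptic input changes, each scaled by a learning-rate constant and by the magnitude of the corresponding earlier weight. The function name, array layout, parameter values, and the sign-and-floor handling below are illustrative assumptions, not code or parameters taken from the paper.

```python
import numpy as np

def drive_reinforcement_update(w, w_hist, dx_hist, dy, c, w_floor=0.1):
    """One drive-reinforcement-style weight update (illustrative sketch).

    w       : (n,) current signed, nonzero weights
    w_hist  : (tau, n) weights at t-1 ... t-tau
    dx_hist : (tau, n) presynaptic input changes at t-1 ... t-tau
    dy      : scalar change in postsynaptic output, y(t) - y(t-1)
    c       : (tau,) learning-rate constants c_1 ... c_tau
    """
    # Only increases in presynaptic signal levels are eligible to drive learning.
    pos_dx = np.maximum(dx_hist, 0.0)
    # dw_i = dy * sum_j c_j * |w_i(t-j)| * max(dx_i(t-j), 0)
    dw = dy * (c[:, None] * np.abs(w_hist) * pos_dx).sum(axis=0)
    w_new = w + dw
    # Assumed constraint: weights keep their excitatory/inhibitory sign and a
    # minimum magnitude, so they cannot cross or settle at zero.
    sign = np.sign(w)
    return sign * np.maximum(sign * w_new, w_floor)

# Example usage with three synapses and a memory window of tau = 5 steps
# (all values below are arbitrary, chosen only to exercise the function).
rng = np.random.default_rng(0)
tau, n = 5, 3
w = np.array([0.5, -0.3, 0.8])
w_hist = np.tile(w, (tau, 1))
dx_hist = rng.uniform(-1.0, 1.0, size=(tau, n))
c = np.array([5.0, 3.0, 1.5, 0.75, 0.25])  # decaying eligibility constants
dy = 0.2
print(drive_reinforcement_update(w, w_hist, dx_hist, dy, c))
```

The sketch covers only the single-synapse update; in the model summarized above, units of this kind are arranged into hierarchically organized control systems (ACPs) that interact with an environment in closed loop.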