The computer simulation/mathematical model called DMOD, which can simulate over 35 different phenomena in appetitive discrete-trial and simple free-operant situations, has been extended to include aversive discrete-trial situations. Learning (V) is calculated using a three-parameter equation, ΔV = αβ(λ - V) (see Daly & Daly, 1982; Rescorla & Wagner, 1972). The equation is applied to three possible goal events in the appetitive (e.g., food) case and to three in the aversive (e.g., shock) case. The original goal event can be present, absent, or reintroduced; in the appetitive situation, these events condition approach (Vap), avoidance (Vav), and courage (Vcc), respectively. In the aversive situation, the events condition avoidance (Vav*), approach (Vap*), and cowardice (Vcc*), respectively. The model was developed in simple learning situations and subsequently was applied to complex situations. It can account for such diverse phenomena as contrast effects after reward shifts, greater persistence following partial than following continuous reinforcement, and a preference for predictable appetitive and predictable aversive events. Application of the aversive version of the model to "reward" shifts is described.

Our goal is to develop a computer simulation/mathematical model of learning that is as simple as possible with as much breadth as possible. The model, called DMOD (Daly MODification of the Rescorla-Wagner Model), is simple because learning is calculated with one simple equation using three parameters. It is applied to diverse situations by assuming that there are a number of different possible goal events, each of which conditions either approach or avoidance of the goal. It was originally developed to account for appetitive, discrete-trial experiments, and can currently account for behavior in over 30 different paradigms (see Daly & Daly, 1982). DMOD was then extended to simulate the effect of simple schedules of reinforcement in free-operant experiments (Daly & Daly, 1984b). Our purpose is to outline the extension of DMOD to the aversive case and to show its application in a complicated experimental situation. To understand the rationale behind the extension, however, it is necessary to review the development of the model in the appetitive case and the rules we follow for developing DMOD.
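The learning rule above is the familiar Rescorla-Wagner update applied separately to each goal event. As a minimal sketch (not the authors' published simulation code), the following Python fragment shows how associative strength V approaches the asymptote λ across reinforced trials; the parameter names (alpha, beta, lam) and the trial loop are illustrative assumptions.

```python
# Sketch of the three-parameter learning rule used by DMOD:
# delta-V = alpha * beta * (lambda - V), applied once per trial.
# Parameter values below are arbitrary and chosen only for illustration.

def delta_v(v, alpha, beta, lam):
    """One-trial change in associative strength."""
    return alpha * beta * (lam - v)

def simulate_acquisition(trials=20, alpha=0.5, beta=0.5, lam=1.0):
    """Apply the update rule over a series of reinforced trials."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += delta_v(v, alpha, beta, lam)
        history.append(v)
    return history

if __name__ == "__main__":
    # V grows toward lambda in a negatively accelerated curve.
    for trial, v in enumerate(simulate_acquisition(), start=1):
        print(f"Trial {trial:2d}: V = {v:.3f}")
```

In the full model, an update of this form would be computed for each conditioned value (e.g., Vap, Vav, Vcc in the appetitive case), with the applicable goal event determining which value changes on a given trial.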
SELECTION OF PHENOMENA TO BE SIMULATED

We believe that the initial goal of a new model is to account for well-established and replicable phenomena. It is dangerous to develop a model around recently discovered phenomena, because the boundary conditions under which they can be obtained and the variables influencing them are unknown. The purpose of a theory, however, is not only to integrate existing replicable data, but also to correctly predict new results. The primary phenomenon we use to test predictions of the appetitive version of DMOD is acquisition of a preference for predictable reward. The model made some interesting predictions concerning when a preference for the unpredictable reward situati...