We propose a novel modeling framework for characterizing the time course of change detection based on information held in visual short-term memory. Specifically, we ask whether change detection is better captured by a first-order integration model, in which information is pooled across locations, or a second-order integration model, in which each location is processed independently. We diagnose whether change detection across locations proceeds serially or in parallel, how processing is affected by the stopping rule (i.e., detecting any change versus detecting all changes; Experiment 1), and how the efficiency of detection is affected by the number of changes in the display (Experiment 2). We find that although capacity is generally limited in both tasks, the processing architecture varies from parallel self-terminating in the OR task to serial self-terminating in the AND task. Our framework enables comparison among a large set of models, ruling out several competing explanations of change detection.
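For concreteness, the architecture and stopping-rule distinctions can be summarized by their predicted decision times. The following is a minimal sketch, assuming two changed locations with channel completion times $T_1$ and $T_2$; this notation is introduced here for illustration and is not taken from the models themselves:

\[
\mathrm{RT}_{\text{parallel--OR}} = \min(T_1, T_2), \qquad
\mathrm{RT}_{\text{parallel--AND}} = \max(T_1, T_2), \qquad
\mathrm{RT}_{\text{serial--exhaustive}} = T_1 + T_2,
\]

while a serial self-terminating model stops after the first location that licenses a response. If capacity is assessed with the standard capacity coefficient of systems factorial technology (an assumption about the measure, stated here only as one common formalization), then for the OR task

\[
C_{\mathrm{OR}}(t) = \frac{H_{12}(t)}{H_{1}(t) + H_{2}(t)}, \qquad H_{i}(t) = -\log S_{i}(t),
\]

where $S_{i}(t)$ is the survivor function of the response-time distribution when only location $i$ changes, $S_{12}(t)$ is the survivor function when both change, and $C_{\mathrm{OR}}(t) < 1$ indicates limited capacity.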