The inclusion of Positive Behavioral Intervention and Supports as a type of applied behavior analysis has often spurred considerable controversy. This controversy seems related to at least three areas of contention: measurement issues, views on aversive control, and the overall target of treatment. The present paper discusses these three areas in light of Horner and Sugai's target article and the current state of the field of applied behavior analysis. In the end, the authors are left wondering why this is even an ongoing controversy.

Keywords: Prevention; Positive behavioral intervention; Punishment; Methodology

Some time back, certain behavior analytic practices came to be known as Positive Behavioral Interventions and Supports (PBIS). These applied practices, which focused on the use of positive reinforcement to the exclusion of aversive control, often incorporated person-centered planning processes in the core conceptualization of individual treatment and often focused on class- or school-wide supports (Carr et al. 2002). PBIS was not a movement by fringe behavioral scientists but rather one led by pillars of the behavior analytic community well trained in behavior analytic principles and their application (e.g., Horner, Carr). Horner and Sugai's (2015) paper suggests that many of these scientists still identify with applied behavior analysis (ABA), and this is a good thing. Variability is good for our science (Madden 2013), and our hope is that the orthodoxy among some applied behavior analysts does not damage this ongoing relationship. In our experience, there are at least three areas of controversy.

Area one is methodological. Horner and Sugai note that much measurement in PBIS pulls directly from the behavior analytic tradition. Many applied behavior analysts, however, criticize the proxy measures used in PBIS research (e.g., office referrals).
They argue that these measures run contrary to the tradition of direct observation cherished by many applied behavior analysts. Although direct observation is often preferred to proxy measurement for determining intervention effects, the sheer scale of measurement, the topographical variation in the functional response classes targeted, and the risk of reactivity in many PBIS programs make direct observation impractical. These factors leave the researcher with two options: (1) do not attempt to answer questions of such large scope or scale, or (2) find a reasonable yet imperfect way to measure the effects of the independent variable on a behavior of relevance to consumers. In PBIS, the consumer is often a school or school district for which the orderly management of a classroom or other school environment is of primary interest. Much as a clean urinalysis suggests the absence of drug use by a participant in a contingency-management intervention, fewer office referrals suggest less challenging behavior. We have little to say in favor of Option 1, as it betrays Skinner's all-encompassing vision for his science of behavior (Skinner 1948; 1954) and...