Brief experimental analysis (BEA) is a well-researched approach to problem analysis in which potential interventions are pilot-tested using a single-subject alternating treatments design. Its brevity, however, may produce a high frequency of decision-making errors, particularly when one of the tested conditions is rarely the optimal one for students (i.e., when its base rate is low). The current study examined the accuracy of a specific variant of BEA, skill versus performance deficit analysis (SPA), across variations of the basic BEA design, score-difference thresholds, and reading and mathematics curriculum-based measures (CBMs). Findings indicate that the ABAB design provided reasonable control of such error rates with reading CBM, whereas subtraction CBM required an extended ABABAB design. Error rates could not be controlled with multiplication CBM, regardless of design. Implications for best practice in the use of BEA are discussed.
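To see why a low base rate inflates decision errors, consider a minimal Bayesian sketch (the sensitivity and specificity values here are hypothetical illustrations, not figures from the study). If SPA flags a skill deficit with sensitivity $s$ and specificity $c$, and the base rate of true skill deficits is $\pi$, the probability that a positive identification is correct (the positive predictive value) is

$$\mathrm{PPV} = \frac{s\,\pi}{s\,\pi + (1 - c)(1 - \pi)}.$$

With assumed values $s = c = 0.80$ and a low base rate $\pi = 0.10$,

$$\mathrm{PPV} = \frac{0.80 \times 0.10}{0.80 \times 0.10 + 0.20 \times 0.90} = \frac{0.08}{0.26} \approx 0.31,$$

so under these hypothetical conditions roughly two out of three positive identifications would be errors, even though the analysis is accurate for any individual case 80% of the time.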