Simple and conditional discrimination training programs are commonly presented in simultaneous discrimination arrangements, which include an array of two or more comparisons. Several descriptive studies have found that these arrangements may be associated with low levels of procedural integrity, which may impede or even prevent learning. Automated systems may offer several advantages over human-implemented procedures, although computer- or tablet-based paradigms are rarely described in the skill acquisition literature. The current study used a modified version of
Smith and Greer’s (2022) validation model to evaluate the technical adequacy of Testable, a web-based software program used to conduct behavioral and psychological experiments, when presenting simple and conditional discrimination tasks. Findings from the five-step evaluation suggested that Testable successfully executed all experimenter-defined conditions and parameters. Moreover, the program reliably recorded response latency, correct responding, and response position. These findings are discussed alongside recent discussions of procedural integrity risk classifications in behavior-analytic research.