THE DIVERSE AND EVER-CHANGING list of technologies encompassed by computer-assisted language learning (CALL) presents evaluators with a challenging moving target. At a time when CALL can include everything from school-based telecollaborative projects to Massive Open Online Courses (MOOCs) to smartphone- and tablet-based apps, previous approaches to evaluation reveal their inadequacies. The checklists that once predominated (see Susser 2001) assumed the focus of evaluation to be tutorial software known as "courseware," which now constitutes a much-diminished part of the CALL landscape. Methodological frameworks like the one proposed by Hubbard (2006) assume the presence of an instructor and a course in which the technology is situated, an assumption countered by increasingly autonomous and self-directed applications of CALL (Reinders and White 2016). And sets of criteria based in principles of interactionist second language acquisition (SLA; Chapelle 2001) seem less than ideal for evaluating technologies designed for individual use whose function is primarily facilitative of second language (L2) usage, such as online dictionaries or translation tools. Disciplines