As digital technologies permeate every aspect of our lives, the complexity of educational settings, and of the technological support used within them, continues to rise. This increased complexity, along with the need for educational practitioners to apply such technologies in authentic, multi-constraint settings, has given rise to the notion of technology-enhanced learning practice as “orchestration of learning”. At the same time, however, the complexity of evaluating the benefits of such educational technologies has also increased, prompting questions about how evaluators can cope with the different places, technologies, informants and issues involved in their evaluation activity. By proposing the notion of “orchestrating evaluation”, this paper attempts to reconcile the often disparate “front office accounts” of research publications with the “shop floor practice” of educational technology evaluation, through a case study of evaluating a system that helps teachers coordinate computer-supported collaborative learning scenarios (GLUE!-PS). We reuse an internationally-evaluated conceptual framework of “orchestration aspects” (design, management, adaptation, pragmatism, etc.) to structure the case’s narrative, showing how the original evaluation questions and methods were modulated in the face of the multiple constraints of the authentic evaluation settings.